
feat: oracle connector #310

Merged
merged 5 commits into from
Jun 28, 2024

Conversation

tonylyjones
Contributor

@tonylyjones tonylyjones commented Jun 27, 2024

Summary by CodeRabbit

  • New Features

    • Introduced a new Oracle connector library for connecting to Oracle databases, extracting schemas, tables, columns, relations, and types, and formatting them into a JSON Schema.
    • Added support for Oracle connector in the gateway service.
  • Documentation

    • Added README.md with detailed instructions on development, usage, and publishing of the Oracle connector library.
  • Tests

    • Configured Jest for testing the Oracle connector library.
  • Dependencies

    • Updated package.json files to include necessary dependencies and scripts for the Oracle connector library.
    • Added @azimutt/connector-oracle as a dependency in the gateway package.
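For illustration, here is a minimal sketch of what "formatting extracted columns into a JSON Schema" can look like. The names (`ExtractedColumn`, `toJsonSchema`) and the type mapping are hypothetical, not the connector's actual API:

```typescript
// A minimal sketch (hypothetical names, not the connector's actual API) of
// turning extracted column metadata into a JSON Schema fragment.
type ExtractedColumn = { name: string; type: string; nullable: boolean }

// Assumed mapping of a few common Oracle types to JSON Schema types.
function jsonType(oracleType: string): string {
    if (/^(NUMBER|FLOAT|BINARY_)/i.test(oracleType)) return "number"
    return "string" // VARCHAR2, DATE, TIMESTAMP... all serialize as strings here
}

function toJsonSchema(table: string, columns: ExtractedColumn[]) {
    return {
        title: table,
        type: "object",
        properties: Object.fromEntries(
            columns.map((c): [string, { type: string }] => [c.name, { type: jsonType(c.type) }])
        ),
        required: columns.filter((c) => !c.nullable).map((c) => c.name),
    }
}

toJsonSchema("USERS", [
    { name: "ID", type: "NUMBER", nullable: false },
    { name: "NAME", type: "VARCHAR2", nullable: true },
])
// → { title: "USERS", type: "object",
//     properties: { ID: { type: "number" }, NAME: { type: "string" } },
//     required: ["ID"] }
```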


coderabbitai bot commented Jun 27, 2024

Walkthrough

Azimutt introduces a new Oracle connector library designed for connecting to Oracle databases, extracting database schemas, and converting them into JSON Schema. Core functionality includes schema extraction, history tracking, and statistical analysis. The package is integrated into the gateway package for broader application use, complete with Jest testing configurations and updated TypeScript settings to support development and deployment.

Changes

  • libs/connector-oracle/README.md: New file detailing the Oracle connector library, usage instructions, and development guidelines.
  • libs/connector-oracle/src/: Added Oracle connector functionality files: connect.ts, constants.test.ts, helpers.ts, history.test.ts, history.ts, index.ts, oracle.test.ts, oracle.ts, query.ts, stats.test.ts, along with related tests.
  • libs/connector-oracle/jest.config.js: New Jest configuration file specifying transformations, Node.js environment, test file pattern, and module extensions.
  • libs/connector-oracle/package.json: Updated package.json with new dependencies, scripts for testing and building, author and repository information, and main/types files.
  • libs/connector-oracle/tsconfig.json: Updated TypeScript configuration targeting ES2021, using commonjs module, defining root directory and output directory, and enabling consistent file naming and strict checks.
  • gateway/package.json: Added @azimutt/connector-oracle as a dependency to the gateway package.
  • gateway/src/services/connector.ts: Modified to import the Oracle connector from @azimutt/connector-oracle and updated the connectors object to include this new connector.

Poem

In burrows deep where data sleeps,
The rabbit leaps, and secrets keeps.
Oracle's call, we heard it clear,
Extracted schemas far and near.
With hops so swift, in code we trust,
Connections made, as data must.
🎉 Hooray for Azimutt's new feat! 🐇


Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?

Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai generate interesting stats about this repository and render them as a table.
    • @coderabbitai show all the console.log statements in this repository.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Additionally, you can add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 13

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 9929ba9 and 1dd9026.

Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
Files selected for processing (21)
  • libs/connector-oracle/.gitignore (1 hunks)
  • libs/connector-oracle/.npmignore (1 hunks)
  • libs/connector-oracle/README.md (1 hunks)
  • libs/connector-oracle/jest.config.js (1 hunks)
  • libs/connector-oracle/package.json (1 hunks)
  • libs/connector-oracle/src/connect.test.ts (1 hunks)
  • libs/connector-oracle/src/connect.ts (1 hunks)
  • libs/connector-oracle/src/constants.test.ts (1 hunks)
  • libs/connector-oracle/src/helpers.test.ts (1 hunks)
  • libs/connector-oracle/src/helpers.ts (1 hunks)
  • libs/connector-oracle/src/history.test.ts (1 hunks)
  • libs/connector-oracle/src/history.ts (1 hunks)
  • libs/connector-oracle/src/index.ts (1 hunks)
  • libs/connector-oracle/src/integration.test.ts (1 hunks)
  • libs/connector-oracle/src/oracle.test.ts (1 hunks)
  • libs/connector-oracle/src/oracle.ts (1 hunks)
  • libs/connector-oracle/src/query.ts (1 hunks)
  • libs/connector-oracle/src/stats.test.ts (1 hunks)
  • libs/connector-oracle/src/stats.ts (1 hunks)
  • libs/connector-oracle/tsconfig.json (1 hunks)
  • libs/models/src/databaseUrl.ts (2 hunks)
Files skipped from review due to trivial changes (3)
  • libs/connector-oracle/.gitignore
  • libs/connector-oracle/.npmignore
  • libs/connector-oracle/jest.config.js
Additional context used
Biome
libs/connector-oracle/src/constants.test.ts

[error] 3-9: Do not export from a test file.

(lint/suspicious/noExportsInTest)


[error] 9-10: Do not export from a test file.

(lint/suspicious/noExportsInTest)

libs/connector-oracle/src/query.ts

[error] 43-45: This else clause can be omitted because previous branches break early.

Unsafe fix: Omit the else clause.

(lint/style/noUselessElse)


[error] 19-19: Avoid the use of spread (...) syntax on accumulators.

Spread syntax should be avoided on accumulators (like those in .reduce) because it causes a time complexity of O(n^2).
Consider methods such as .splice or .push instead.

(lint/performance/noAccumulatingSpread)

libs/connector-oracle/src/oracle.ts

[error] 89-89: Avoid the use of spread (...) syntax on accumulators.

Spread syntax should be avoided on accumulators (like those in .reduce) because it causes a time complexity of O(n^2).
Consider methods such as .splice or .push instead.

(lint/performance/noAccumulatingSpread)

LanguageTool
libs/connector-oracle/README.md

[grammar] ~3-~3: Did you mean “connecting”? Or maybe you should add a pronoun? In active voice, ‘allow’ + ‘to’ takes an object, usually a pronoun.
Context: ...# Oracle connector This library allows to connect to [Oracle](https://www.oracle.com/data...

(ALLOW_TO)


[uncategorized] ~5-~5: This verb does not appear to agree with the subject. Consider using a different form.
Context: ...ables, columns, relations and types and format them in a JSON Schema. This library is...

(AI_EN_LECTOR_REPLACEMENT_VERB_AGREEMENT)


[style] ~10-~10: The phrase ‘feel free to’ is used quite frequently. Consider using a less frequent alternative to set your writing apart from others and make it sound more professional.
Context: ...the gateway server. **Feel free to use it and even submit PR to improve it...

(FEEL_FREE_TO_STYLE_ME)

Additional comments not posted (15)
libs/connector-oracle/src/helpers.ts (1)

3-6: Review of buildSqlTable function

The function correctly handles the optional schema presence by adding it to the SQL string if it exists. The use of template literals and string interpolation seems appropriate for this function's purpose.
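The reviewed function is not quoted in this thread; under the behavior the comment describes (an optional schema prefix added to the quoted table name), a minimal reconstruction might look like this. The exact types and quoting rules are assumptions:

```typescript
// Hypothetical reconstruction of buildSqlTable based on the review comment:
// prepend the quoted schema only when it is present.
type EntityRef = { schema?: string; entity: string }
type SqlFragment = string

function buildSqlTable(ref: EntityRef): SqlFragment {
    const schema = ref.schema ? `"${ref.schema}".` : ""
    return `${schema}"${ref.entity}"`
}

buildSqlTable({ entity: "USERS" })                       // "USERS"
buildSqlTable({ schema: "C##AZIMUTT", entity: "USERS" }) // "C##AZIMUTT"."USERS"
```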

libs/connector-oracle/src/helpers.test.ts (1)

4-15: Review of helper function tests

The tests are well-structured and seem to cover various scenarios for both buildSqlTable and buildSqlColumn. Consider adding negative test cases or cases with unusual input to ensure robustness.

libs/connector-oracle/src/history.test.ts (1)

24-32: Review of getQueryHistory test

The main test case is skipped, which might be intentional for CI environments where a local database setup isn't available. Consider adding a comment to clarify why the test is skipped and if there are conditions under which it should be run.

libs/connector-oracle/package.json (1)

20-39: Review of package configuration

The scripts and dependencies are appropriately configured. However, ensure that the build:docker script is compatible with all expected environments since it uses npx which assumes npm is available.

libs/connector-oracle/src/stats.test.ts (1)

7-37: Consider parameterizing test inputs and enabling tests.

The tests are currently skipped (test.skip). If these are meant to be integration tests, consider enabling them in a controlled environment or adding a mechanism to skip based on environment variables. Additionally, the database URL and schema details are hardcoded, which could be parameterized to enhance flexibility and reusability.

libs/connector-oracle/src/history.ts (1)

8-14: Type Definition for RawQuery Looks Good

The RawQuery type is well-defined and covers essential fields needed for representing query history.

libs/connector-oracle/src/index.ts (4)

23-39: getSchema Method Implementation is Robust

The method correctly constructs options and utilizes asynchronous processing effectively. Good use of runtime validation with zodParseAsync.


40-47: getQueryHistory Method Implementation is Effective

This method follows established patterns and correctly handles asynchronous operations and data validation.


48-57: execute Method Correctly Handles Query Execution

The method effectively executes SQL queries and ensures result validation, promoting robustness in data handling.


65-82: Statistics Methods are Well-Implemented

Both getEntityStats and getAttributeStats methods are implemented consistently with the rest of the module and correctly handle asynchronous operations and data validation.

libs/connector-oracle/src/connect.ts (1)

16-78: connect Function is Robust and Flexible

The connect function is well-implemented with comprehensive error handling and flexible type usage. Consider adding detailed documentation for the function to enhance clarity and maintainability.

libs/connector-oracle/src/stats.ts (2)

15-33: Statistics Functions for Tables and Columns are Effectively Implemented

The getTableStats and getColumnStats functions are well-crafted with efficient SQL usage and proper asynchronous handling. Good use of helper functions to modularize the code.


35-129: Helper Functions for Statistics are Well-Structured

The helper functions like countRows, getSampleValues, getColumnType, etc., are efficiently implemented. Consider adding comments to explain the logic behind complex SQL queries for better maintainability.

libs/models/src/databaseUrl.ts (1)

29-29: Oracle regex appears correctly structured.

Include unit tests that cover various Oracle URL formats to validate this regex fully.

Verification successful

Oracle regex appears correctly structured.

The parseDatabaseUrl function, which uses the oracleRegexxxxx, is thoroughly tested in databaseUrl.test.ts. This ensures that the regex is indirectly tested through various URL parsing scenarios.

  • libs/models/src/databaseUrl.test.ts: Multiple test cases for parseDatabaseUrl.
Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Description: Verify that appropriate unit tests exist for Oracle regex parsing.

# Test: Search for Oracle URL parsing tests. Expect: At least one test covering various URL formats.
rg --type typescript --files-with-matches 'oracleRegexxxxx' 'libs/models/src/*.test.ts'

Length of output: 122



Script:

#!/bin/bash
# Description: Verify that appropriate unit tests exist for Oracle regex parsing.

# Test: Search for Oracle URL parsing tests. Expect: At least one test covering various URL formats.
rg --type ts --files-with-matches 'oracleRegexxxxx' 'libs/models/src'

Length of output: 98



Script:

#!/bin/bash
# Description: Verify that appropriate unit tests exist for Oracle regex parsing by checking references to `databaseUrl.ts`.

# Test: Search for any test files that reference methods or functions within `databaseUrl.ts`.
rg --type ts --files-with-matches 'databaseUrl' 'libs/models/src'

Length of output: 248



Script:

#!/bin/bash
# Description: Verify that `databaseUrl.test.ts` contains tests for the Oracle regex.

# Test: Search for references to `oracleRegexxxxx` within `databaseUrl.test.ts`.
rg 'oracleRegexxxxx' 'libs/models/src/databaseUrl.test.ts'

Length of output: 56



Script:

#!/bin/bash
# Description: Identify functions or methods in `databaseUrl.ts` that use `oracleRegexxxxx`.

# Test: Search for `oracleRegexxxxx` within `databaseUrl.ts` to find its usage context.
rg 'oracleRegexxxxx' 'libs/models/src/databaseUrl.ts'

Length of output: 217



Script:

#!/bin/bash
# Description: Identify the function or method in `databaseUrl.ts` that uses `oracleRegexxxxx`.

# Test: Search for the context around `const oracleMatches = url.match(oracleRegexxxxx)` to identify the enclosing function or method.
rg -A 10 'const oracleMatches = url.match(oracleRegexxxxx)' 'libs/models/src/databaseUrl.ts'

Length of output: 92



Script:

#!/bin/bash
# Description: Identify the full context of the function or method in `databaseUrl.ts` that uses `oracleRegexxxxx`.

# Test: Search for the context around `const oracleMatches = url.match(oracleRegexxxxx)` with an extended range to identify the enclosing function or method.
rg -A 20 -B 10 'const oracleMatches = url.match(oracleRegexxxxx)' 'libs/models/src/databaseUrl.ts'

Length of output: 98



Script:

#!/bin/bash
# Description: List all functions or methods in `databaseUrl.ts`.

# Test: Search for all function and method definitions in `databaseUrl.ts`.
rg 'function |const |let ' 'libs/models/src/databaseUrl.ts'

Length of output: 4490



Script:

#!/bin/bash
# Description: Verify that `parseDatabaseUrl` function is tested in `databaseUrl.test.ts`.

# Test: Search for references to `parseDatabaseUrl` within `databaseUrl.test.ts`.
rg 'parseDatabaseUrl' 'libs/models/src/databaseUrl.test.ts'

Length of output: 2614
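To make the discussion above concrete, here is an illustrative parser for the URL shape used in this PR's tests (`jdbc:oracle:thin:user/pass@//host:port/service`). The regex below is a sketch, not the project's actual regex from libs/models/src/databaseUrl.ts:

```typescript
// Illustrative only: parses jdbc:oracle:thin:user/pass@//host:port/service.
// Not the regex actually used by parseDatabaseUrl.
const oracleUrl = /^jdbc:oracle:thin:(?:([^/:@]+)(?:\/([^@]+))?@)?\/\/([^:/]+)(?::(\d+))?\/(.+)$/

function parseOracleUrl(url: string) {
    const m = url.match(oracleUrl)
    if (!m) return undefined
    const [, user, pass, host, port, db] = m
    return { user, pass, host, port: port ? parseInt(port, 10) : undefined, db }
}

parseOracleUrl("jdbc:oracle:thin:sys/oracle@//localhost:1521/FREE")
// → { user: "sys", pass: "oracle", host: "localhost", port: 1521, db: "FREE" }
```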

libs/connector-oracle/tsconfig.json (1)

1-105: TypeScript configuration is well-defined and robust.

This configuration sets a solid foundation for the project. Ensure that similar settings are used across other TypeScript projects within your organization to maintain consistency and reduce issues related to differing TypeScript behaviors.

Comment on lines +98 to +104
const oracleMatches = url.match(oracleRegexxxxx)
if (oracleMatches) {
const kind: DatabaseKind = 'oracle'
const [, user, pass, host, port, db, optionsStr] = oracleMatches
const options = optionsStr ? parseDatabaseOptions(optionsStr) : undefined
return removeUndefined({full: url, kind, user, pass, host, port: port ? parseInt(port) : undefined, db, options})
}

Oracle URL parsing correctly integrated into parseDatabaseUrl.

The integration follows existing patterns for other databases, which maintains consistency. Consider refactoring parseDatabaseUrl to reduce its complexity and improve maintainability by splitting it into smaller functions, each handling a specific database type.

- const oracleMatches = url.match(oracleRegexxxxx)
+ const oracleMatches = parseOracleUrl(url) // This would be a new function encapsulating the Oracle URL parsing logic.

Committable suggestion was skipped due to low confidence.

Comment on lines +8 to +10
export function buildSqlColumn(path: AttributePath): SqlFragment {
const [head, ...tail] = path
return `"${head}"${tail.map((t) => `->'${t}'`).join("")}`

Tip

Codebase Verification

Potential SQL Injection Risk in buildSqlColumn Function

The buildSqlColumn function constructs SQL column references, including nested JSON attributes, without apparent sanitization of the AttributePath values. This could lead to SQL injection vulnerabilities if the AttributePath includes user input.

  • File: libs/connector-oracle/src/stats.ts

    • Lines: const sqlColumn = buildSqlColumn(ref.attribute)
    • Lines: const value = await (values.length > 0 ? Promise.resolve(values[0]) : getSampleValue(conn, sqlTable, buildSqlColumn([field.name])))
  • File: libs/connector-oracle/src/oracle.ts

    • Lines: const sqlColumn = buildSqlColumn(attribute)
  • File: libs/connector-sqlserver/src/stats.ts

    • Lines: const sqlColumn = buildSqlColumn(ref.attribute)
  • File: libs/connector-sqlserver/src/sqlserver.ts

    • Lines: const sqlColumn = buildSqlColumn(attribute)
  • File: libs/connector-mysql/src/stats.ts

    • Lines: const sqlColumn = buildSqlColumn(ref.attribute)
  • File: libs/connector-mysql/src/mysql.ts

    • Lines: const sqlColumn = buildSqlColumn(attribute)
  • File: libs/connector-snowflake/src/stats.ts

    • Lines: const sqlColumn = buildSqlColumn(ref.attribute)
  • File: libs/connector-postgres/src/stats.ts

    • Lines: const sqlColumn = buildSqlColumn(ref.attribute)
    • Lines: const value = await (values.length > 0 ? Promise.resolve(values[0]) : getSampleValue(conn, sqlTable, buildSqlColumn([field.name])))
  • File: libs/connector-postgres/src/postgres.ts

    • Lines: const sqlColumn = buildSqlColumn(attribute)
  • File: libs/connector-mariadb/src/stats.ts

    • Lines: const sqlColumn = buildSqlColumn(ref.attribute)
  • File: libs/connector-mariadb/src/mariadb.ts

    • Lines: const sqlColumn = buildSqlColumn(attribute)

Please ensure that AttributePath values are properly sanitized before being passed to the buildSqlColumn function to prevent SQL injection vulnerabilities.

Analysis chain

Review of buildSqlColumn function

This function handles the building of SQL column references, including nested JSON attributes using the -> operator. The spread operator and map are correctly used to handle multiple nested attributes. Ensure that all potential SQL injection vectors are considered, especially if AttributePath can come from user input.

Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Description: Verify that AttributePath values are sanitized before being passed to buildSqlColumn.

# Test: Search for usages of buildSqlColumn and check for input sanitization.
rg --type typescript $'buildSqlColumn'

Length of output: 72



Script:

#!/bin/bash
# Description: Verify that AttributePath values are sanitized before being passed to buildSqlColumn.

# Test: Search for usages of buildSqlColumn and check for input sanitization.
rg 'buildSqlColumn' --glob '**/*.ts' --glob '**/*.tsx'

Length of output: 5785



Script:

#!/bin/bash
# Description: Extract lines around the calls to buildSqlColumn to check for input sanitization.

# Extract lines around the calls to buildSqlColumn in relevant files.
rg 'buildSqlColumn' --glob '**/*.ts' --glob '**/*.tsx' -A 3 -B 3

Length of output: 27106
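One possible mitigation for the risk described above (a sketch, not the project's code) is to validate each path segment against a conservative identifier allow-list before interpolating it into the SQL fragment:

```typescript
// Sketch: guard buildSqlColumn against unsafe identifiers before interpolation.
// The allow-list regex is an assumption; Oracle permits # and $ in identifiers
// (e.g. the C##AZIMUTT schema used in this PR's tests).
type AttributePath = string[]
type SqlFragment = string

const safeIdent = /^[A-Za-z0-9_#$]+$/

function buildSqlColumn(path: AttributePath): SqlFragment {
    for (const segment of path) {
        if (!safeIdent.test(segment)) throw new Error(`Unsafe identifier: ${segment}`)
    }
    const [head, ...tail] = path
    return `"${head}"${tail.map((t) => `->'${t}'`).join("")}`
}

buildSqlColumn(["settings", "theme"]) // "settings"->'theme'
// buildSqlColumn(['na"me']) throws instead of producing injectable SQL
```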

Comment on lines +4 to +10
export const logger: Logger = {
debug: (text: string): void => console.debug(text),
log: (text: string): void => console.log(text),
warn: (text: string): void => console.warn(text),
error: (text: string): void => console.error(text)
}
export const application = 'azimutt-tests'

Remove exports from test file

Exporting from test files can lead to unintended dependencies and should be avoided. Consider using a common setup file for shared test utilities instead.

- export const logger = {
-     debug: (text: string): void => console.debug(text),
-     log: (text: string): void => console.log(text),
-     warn: (text: string): void => console.warn(text),
-     error: (text: string): void => console.error(text)
- }
- export const application = 'azimutt-tests'
Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
export const logger: Logger = {
debug: (text: string): void => console.debug(text),
log: (text: string): void => console.log(text),
warn: (text: string): void => console.warn(text),
error: (text: string): void => console.error(text)
}
export const application = 'azimutt-tests'
const logger: Logger = {
debug: (text: string): void => console.debug(text),
log: (text: string): void => console.log(text),
warn: (text: string): void => console.warn(text),
error: (text: string): void => console.error(text)
}
const application = 'azimutt-tests'
Tools
Biome

[error] 9-10: Do not export from a test file.

(lint/suspicious/noExportsInTest)

Comment on lines +43 to +45
} else {
return newName
}

Remove unnecessary 'else' clause.

The 'else' clause in the uniqueName function is unnecessary since the 'if' condition returns early. Removing it simplifies the code.

-  } else {
+  }
Committable suggestion


Suggested change
} else {
return newName
}
}
Tools
Biome

[error] 43-45: This else clause can be omitted because previous branches break early.

Unsafe fix: Omit the else clause.

(lint/style/noUselessElse)
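For context, a hypothetical reconstruction of a `uniqueName` helper consistent with the attribute names seen later in this PR's tests (`id`, `id_2`, `id_3`); the real implementation is not fully quoted in this thread. It shows why the `else` is redundant: the `if` branch already returns early.

```typescript
// Hypothetical uniqueName: append _2, _3, ... until the name is unused.
function uniqueName(name: string, used: string[], count = 1): string {
    const newName = count === 1 ? name : `${name}_${count}`
    if (used.includes(newName)) {
        return uniqueName(name, used, count + 1) // early return: no else needed below
    }
    return newName
}

uniqueName("id", [])             // "id"
uniqueName("id", ["id"])         // "id_2"
uniqueName("id", ["id", "id_2"]) // "id_3"
```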

): Promise<QueryResults> {
const attributes = buildAttributes(result.fields)
const rows = result.rows.map((row) =>
attributes.reduce((acc, col, i) => ({ ...acc, [col.name]: row[i] }), {})

Optimize the use of the spread operator in reduce.

Using the spread operator in the reduce function can lead to quadratic time complexity. Consider using a different method to accumulate results.

- const rows = result.rows.map((row) =>
-   attributes.reduce((acc, col, i) => ({ ...acc, [col.name]: row[i] }), {})
+ const rows = result.rows.map((row) => {
+   const rowResult = {};
+   attributes.forEach((col, i) => {
+     rowResult[col.name] = row[i];
+   });
+   return rowResult;
+ })
Committable suggestion


Suggested change
attributes.reduce((acc, col, i) => ({ ...acc, [col.name]: row[i] }), {})
const rows = result.rows.map((row) => {
const rowResult = {};
attributes.forEach((col, i) => {
rowResult[col.name] = row[i];
});
return rowResult;
})
Tools
Biome

[error] 19-19: Avoid the use of spread (...) syntax on accumulators.

Spread syntax should be avoided on accumulators (like those in .reduce) because it causes a time complexity of O(n^2).
Consider methods such as .splice or .push instead.

(lint/performance/noAccumulatingSpread)
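Besides a mutating loop, another linear-time way to build each row object is to construct the `[name, value]` pairs once and let `Object.fromEntries` assemble the object. The data below is made up for illustration:

```typescript
// Linear-time row building without spreading onto an accumulator.
type Attribute = { name: string }

const attributes: Attribute[] = [{ name: "id" }, { name: "name" }]
const row: (string | number)[] = [1, "Loïc"]

// O(n) per row: one pairs array, one object construction.
const obj = Object.fromEntries(
    attributes.map((col, i): [string, string | number] => [col.name, row[i]])
)
// → { id: 1, name: "Loïc" }
```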

Comment on lines +7 to +29
// Use this test to troubleshoot database connection errors.
// If you don't succeed with the first one (Azimutt `connect`), try with the second one (raw node lib) and once you found a way, tell us how to fix ;)
// Of course, you can contact us (issues or contact@azimutt.app) to do it together.
// More documentation available at: https://azimutt.notion.site/Database-connection-troubleshooting-c4c19ed28c7040ef9aaaeec96ce6ba8d
describe("connect", () => {
// TODO 1: replace this with your own connection string, but don't commit it!
const url = "jdbc:oracle:thin:sys/oracle@//localhost:1521/FREE"

// TODO 2: write a valid query for your database
const query = "SELECT * FROM C##AZIMUTT.USERS"
const parameters: any[] = []

// TODO 3: unskip this test first and run it (`npm run test -- src/connect.test.ts`)
test.skip("Azimutt should connect", async () => {
const parsedUrl = parseDatabaseUrl(url)
const results = await connect(
application,
parsedUrl,
execQuery(query, parameters),
{ logger, logQueries: true }
)
console.log("results", results)
})

Tip

Codebase Verification

Clarify TODO comments and consider enabling the test.

The test in libs/connector-oracle/src/connect.test.ts is skipped and contains several TODO comments, which are helpful but could be more specific in guiding the user on what needs to be done. Consider providing more detailed instructions or examples for the following TODO comments:

  • Replace the connection string with a valid one but ensure it is not committed.
  • Write a valid query for your database.
  • Unskip the test and run it using the provided command.

Additionally, evaluate whether this test can be enabled by default with environmental checks.

  • File: libs/connector-oracle/src/connect.test.ts
  • Lines: 7-29
Analysis chain

Clarify TODO comments and consider enabling the test.

The test is skipped and contains several TODO comments. These are helpful but could be more specific in guiding the user on what needs to be done. Consider providing more detailed instructions or examples. Also, evaluate whether this test can be enabled by default with environmental checks.

Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Description: Check for guidance on enabling tests and handling TODOs.

# Test: Search for TODO comments and their handling in the codebase.
rg "TODO" --type typescript

Length of output: 62



Script:

#!/bin/bash
# Description: Check for guidance on enabling tests and handling TODOs.

# Test: Search for TODO comments and their handling in TypeScript files.
rg "TODO" --glob "*.ts"

Length of output: 15781

Comment on lines +16 to +50
export const getQueryHistory =
(opts: ConnectorQueryHistoryOpts) =>
async (conn: Conn): Promise<DatabaseQuery[]> => {
return conn
.query(
`
SELECT
h.sql_id,
i.instance_name AS DATABASE_NAME,
s.rows_processed,
u.username
FROM
dba_hist_sqlstat h
JOIN
v$sql s ON s.sql_id = h.sql_id
JOIN
dba_hist_active_sess_history a ON h.sql_id = a.sql_id
JOIN
dba_users u ON a.user_id = u.user_id
JOIN
gv$instance i ON h.instance_number = i.instance_number
ORDER BY
h.sql_id`,
[],
"getQueryHistory"
)
.then((queries) => {
return queries.reduce<RawQuery[]>((acc, row) => {
const [id, databaseName, rows, user] = row as any[]
acc.push({ id, query: "", database: databaseName, rows, user })
return acc
}, [])
})
.catch(handleError(`Failed to get historical queries`, [], opts))
}

Review and Suggestion for getQueryHistory Function

The function is well-implemented with robust error handling. However, the query field in RawQuery is always set to an empty string. If this is intentional, consider documenting why. If not, it might be an oversight.

Consider adding the actual SQL query text if applicable:

acc.push({ id, query: row.queryText, database: databaseName, rows, user })  // Assuming 'queryText' is the correct field from the SQL result

Comment on lines +58 to +64
analyze: (
application: string,
url: DatabaseUrlParsed,
query: string,
parameters: any[],
opts: ConnectorDefaultOpts
): Promise<QueryAnalyze> => Promise.reject("Not implemented"),
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

analyze Method Needs Attention

The method is not implemented and returns a rejection. Implement or remove it before moving to production.

Comment on lines +20 to +91
describe("oracle", () => {
// local url, install db or replace it to test
const url: DatabaseUrlParsed = parseDatabaseUrl(
"jdbc:oracle:thin:sys/oracle@//localhost:1521/FREE"
)

const opts: ConnectorSchemaOpts = {
logger,
logQueries: false,
inferJsonAttributes: true,
inferPolymorphicRelations: true,
}

test.skip("execQuery", async () => {
const query =
"SELECT u.id, e.id, o.id FROM users u JOIN events e ON u.id = e.created_by JOIN organizations o on o.id = e.organization_id FETCH FIRST 10 ROWS ONLY"
const results = await connect(application, url, execQuery(query, []), opts)
expect(results.attributes).toEqual([
{
name: "id",
ref: { schema: "public", entity: "users", attribute: ["id"] },
},
{
name: "id_2",
ref: { schema: "public", entity: "events", attribute: ["id"] },
},
{
name: "id_3",
ref: { schema: "public", entity: "organizations", attribute: ["id"] },
},
])
})
test.skip("getSchema", async () => {
const schema = await connect(application, url, getSchema(opts), opts)
console.log("schema", schema)
// console.log('schema', schema.entities?.find(e => e.name == 'events')?.attrs?.find(a => a.name == 'name')?.stats)
expect(schema.entities?.length).toEqual(14)
})
test.skip("getBlockSize", async () => {
const blockSize = await connect(application, url, getBlockSize(opts), opts)
console.log(`blockSize`, blockSize)
})
test.skip("getDatabase", async () => {
const database = await connect(application, url, getDatabase(opts), opts)
console.log(`database`, database)
})
test.skip("getTables", async () => {
const tables = await connect(application, url, getTables(opts), opts)
console.log(`${tables.length} tables`, tables)
})
test.skip("getColumns", async () => {
const columns = await connect(application, url, getColumns(opts), opts)
console.log(`${columns.length} columns`, columns)
})
test.skip("getTypes", async () => {
const types = await connect(application, url, getTypes(opts), opts)
console.log(`${types.length} types`, types)
})
test.skip("getDistinctValues", async () => {
const values = await connect(
application,
url,
getDistinctValues(
{ schema: "C##AZIMUTT", entity: "USERS" },
["NAME"],
opts
),
opts
)
console.log(`${values.length} values`, values)
})
})

Enable and Expand Test Coverage

The test cases are comprehensive but are currently skipped. Consider enabling these tests and ensuring they cover all edge cases and failure scenarios for robust testing.
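One way to keep these tests green in CI while still runnable against a local database is to gate them on an environment variable; `ORACLE_URL` below is a hypothetical variable name, not something the PR defines.

```typescript
// Decide whether live-database tests should run, based on whether a
// connection URL was provided in the environment.
const liveDbMode = (url: string | undefined): "run" | "skip" =>
  url !== undefined && url.length > 0 ? "run" : "skip"

// In oracle.test.ts this could drive Jest's test selection:
//   const testLive = liveDbMode(process.env.ORACLE_URL) === "run" ? test : test.skip
//   testLive("getSchema", async () => { ... })
```

This removes the need to manually flip `test.skip` back and forth when testing locally.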

Comment on lines +40 to +129
export const getSchema =
(opts: ConnectorSchemaOpts) =>
async (conn: Conn): Promise<Database> => {
const start = Date.now()
const scope = formatConnectorScope(
{ schema: "schema", entity: "table" },
opts
)
opts.logger.log(
`Connected to the database${scope ? `, exporting for ${scope}` : ""} ...`
)

// access system tables only
const blockSize: number = await getBlockSize(opts)(conn)
const database: RawDatabase = await getDatabase(opts)(conn)
const tables: RawTable[] = await getTables(opts)(conn)
opts.logger.log(`Found ${pluralizeL(tables, "table")} ...`)
const columns: RawColumn[] = await getColumns(opts)(conn)
opts.logger.log(`Found ${pluralizeL(columns, "column")} ...`)
const constraints: RawConstraint[] = await getConstraints(opts)(conn)
opts.logger.log(`Found ${pluralizeL(constraints, "constraint")} ...`)
const indexes: RawIndex[] = await getIndexes(opts)(conn)
opts.logger.log(`Found ${pluralizeL(indexes, "index")} ...`)
const relations: RawRelation[] = await getRelations(opts)(conn)
opts.logger.log(`Found ${pluralizeL(relations, "relation")} ...`)
const types: RawType[] = await getTypes(opts)(conn)
opts.logger.log(`Found ${pluralizeL(types, "type")} ...`)

// access table data when options are requested
const columnsByTable = groupByEntity(columns)
const jsonColumns: Record<
EntityId,
Record<AttributeName, ValueSchema>
> = opts.inferJsonAttributes
? await getJsonColumns(columnsByTable, opts)(conn)
: {}
const polyColumns: Record<
EntityId,
Record<AttributeName, string[]>
> = opts.inferPolymorphicRelations
? await getPolyColumns(columnsByTable, opts)(conn)
: {}
// TODO: pii, join relations...

// build the database
const columnsByIndex: Record<EntityId, { [i: number]: string }> = mapValues(
columnsByTable,
(cols) =>
cols.reduce(
(acc, col) => ({ ...acc, [col.column_index]: col.column_name }),
{}
)
)
const constraintsByTable = groupByEntity(constraints)
const indexesByTable = groupByEntity(indexes)
opts.logger.log(
`✔︎ Exported ${pluralizeL(tables, "table")}, ${pluralizeL(relations, "relation")} and ${pluralizeL(types, "type")} from the database!`
)
return removeUndefined({
entities: tables
.map((table) => [toEntityId(table), table] as const)
.map(([id, table]) =>
buildEntity(
blockSize,
table,
columnsByTable[id] || [],
columnsByIndex[id] || {},
constraintsByTable[id] || [],
indexesByTable[id] || [],
jsonColumns[id] || {},
polyColumns[id] || {}
)
),
relations: relations
.map((r) => buildRelation(r, columnsByIndex))
.filter((rel): rel is Relation => !!rel),
types: types.map(buildType),
doc: undefined,
stats: removeUndefined({
name: conn.url.db || database.database,
kind: DatabaseKind.Enum.postgres,
version: database.version,
doc: undefined,
extractedAt: new Date().toISOString(),
extractionDuration: Date.now() - start,
size: database.blks_read * blockSize,
}),
extra: undefined,
})
}

Optimize the use of spread syntax in accumulators.

The use of spread syntax in the reduce function can lead to performance issues due to its O(n^2) complexity. Consider mutating the accumulator in place instead (direct property assignment here, since the accumulator is an object).

-          (acc, col) => ({ ...acc, [col.column_index]: col.column_name }),
+          (acc, col) => { acc[col.column_index] = col.column_name; return acc; },

Overall, the function is well-structured and aligns with the rest of the codebase.
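The two reduce styles from the diff can be compared side by side in a self-contained demo (the column data is invented for illustration); both build the same index-to-name map, but the second avoids copying the accumulator on every iteration.

```typescript
type Col = { column_index: number; column_name: string }
const cols: Col[] = [
  { column_index: 1, column_name: "ID" },
  { column_index: 2, column_name: "NAME" },
]

// O(n^2): each spread copies all keys accumulated so far
const spreadVersion = cols.reduce<Record<number, string>>(
  (acc, col) => ({ ...acc, [col.column_index]: col.column_name }),
  {}
)

// O(n): mutate the single accumulator object in place
const mutateVersion = cols.reduce<Record<number, string>>(
  (acc, col) => { acc[col.column_index] = col.column_name; return acc },
  {}
)
```

The difference is negligible for a handful of columns but grows quadratically for wide tables.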

Tools
Biome

[error] 89-89: Avoid the use of spread (...) syntax on accumulators.

Spread syntax should be avoided on accumulators (like those in .reduce) because it causes a time complexity of O(n^2).
Consider methods such as .splice or .push instead.

(lint/performance/noAccumulatingSpread)

@tonylyjones tonylyjones marked this pull request as draft June 28, 2024 09:13

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 1dd9026 and e5cd9b0.

Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
Files selected for processing (2)
  • gateway/package.json (1 hunks)
  • gateway/src/services/connector.ts (2 hunks)
Additional comments not posted (3)
gateway/src/services/connector.ts (2)

10-10: Correct import statement for Oracle connector.

The import statement is consistent with the other connector imports and correctly references the new Oracle connector package.


21-21: Proper integration of Oracle connector into connectors object.

The addition of the Oracle connector to the connectors object follows the established pattern and is correctly initialized.

gateway/package.json (1)

48-48: Correct addition of Oracle connector as a dependency.

The use of "workspace:^" for the Oracle connector dependency is appropriate for a monorepo setup, ensuring that the package uses the version that is currently in the workspace.

@loicknuchel loicknuchel marked this pull request as ready for review June 28, 2024 19:39
@loicknuchel loicknuchel merged commit a513675 into main Jun 28, 2024
4 of 5 checks passed
@loicknuchel loicknuchel deleted the feature/add-oracle-connector branch June 28, 2024 19:41

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between e5cd9b0 and 7b501e3.

Files selected for processing (3)
  • libs/connector-oracle/README.md (1 hunks)
  • libs/connector-oracle/src/connect.ts (1 hunks)
  • libs/connector-oracle/src/oracle.ts (1 hunks)
Files skipped from review as they are similar to previous changes (1)
  • libs/connector-oracle/src/connect.ts
Additional context used
LanguageTool
libs/connector-oracle/README.md

[grammar] ~3-~3: Did you mean “connecting”? Or maybe you should add a pronoun? In active voice, ‘allow’ + ‘to’ takes an object, usually a pronoun.
Context: ...# Oracle connector This library allows to connect to [Oracle](https://www.oracle.com/data...

(ALLOW_TO)


[style] ~10-~10: The phrase ‘feel free to’ is used quite frequently. Consider using a less frequent alternative to set your writing apart from others and make it sound more professional.
Context: ...the gateway server. **Feel free to use it and even submit PR to improve it...

(FEEL_FREE_TO_STYLE_ME)

Biome
libs/connector-oracle/src/oracle.ts

[error] 89-89: Avoid the use of spread (...) syntax on accumulators.

Spread syntax should be avoided on accumulators (like those in .reduce) because it causes a time complexity of O(n^2).
Consider methods such as .splice or .push instead.

(lint/performance/noAccumulatingSpread)

@@ -0,0 +1,75 @@
# Oracle connector

This library allows to connect to [Oracle](https://www.oracle.com/database), extract its schema and more...

Grammar Improvement: Adjust phrasing for clarity.

Consider revising "This library allows to connect to Oracle" to "This library allows you to connect to Oracle" for grammatical correctness.

- This library allows to connect to [Oracle](https://www.oracle.com/database)
+ This library allows you to connect to [Oracle](https://www.oracle.com/database)

This library is made by [Azimutt](https://azimutt.app) to allow people to explore their Oracle database.
It's accessible through the [Desktop app](../../desktop) (soon), the [CLI](https://www.npmjs.com/package/azimutt) or even the website using the [gateway](../../gateway) server.

**Feel free to use it and even submit PR to improve it:**

Style Improvement: Consider alternative phrasing.

The phrase "Feel free to use it" could be replaced with a more professional alternative like "You are encouraged to use it."

- **Feel free to use it and even submit PR to improve it:**
+ **You are encouraged to use it and even submit PRs to improve it:**

This library allows to connect to [Oracle](https://www.oracle.com/database), extract its schema and more...

It lists all schemas, tables, columns, relations and types and format them in a JSON Schema.

Grammar Improvement: Verb agreement.

The phrase "and format them in a JSON Schema" should be adjusted for proper verb agreement, perhaps to "and formats them into a JSON Schema."

- It lists all schemas, tables, columns, relations and types and format them in a JSON Schema.
+ It lists all schemas, tables, columns, relations, and types and formats them into a JSON Schema.

columnsByTable,
(cols) =>
cols.reduce(
(acc, col) => ({ ...acc, [col.column_index]: col.column_name }),

Optimize the use of spread syntax in accumulators.

The use of spread syntax in the reduce function can lead to performance issues due to its O(n^2) complexity. Consider mutating the accumulator in place instead (direct property assignment here, since the accumulator is an object).

-          (acc, col) => ({ ...acc, [col.column_index]: col.column_name }),
+          (acc, col) => { acc[col.column_index] = col.column_name; return acc; },