
Way to have test pass if compile error raised #3060

Closed
MarkMacArdle opened this issue Feb 9, 2021 · 4 comments
Labels
dbt tests · enhancement · stale

Comments

@MarkMacArdle

Describe the feature

A method of specifying that a test should pass if a compile error is raised.

I wanted this while working on modifying the equality test in dbt-utils. There were two places I thought it'd be useful:

Validating inputs

The current equality test raises a compile error to validate its inputs, but there isn't a way to write an integration test that confirms the compile error is raised when expected.

Tests involving metadata

The modification I was working on was to check whether two tables had the same columns. I added optional arguments for checking column order, capitalisation of names, and data types (PR here). The information schema is used to pull the column metadata, and if the specified parts don't match, compile errors are raised. With four optional arguments and many possible combinations, I would have loved a way to write integration tests to check the compile errors were being raised when they should have been.

Proposal

I like the discussion in #2982 of having a wrapper test that would consume another test, so I could have something like:

models:
  - name: model_name
    tests:
      - test_fail:
          expect: compile_error
          test_macro:
            dbt_utils.equality:
              compare_model: ref('other_table_name')
              column_metadata_tests:
                - all_columns_present_in_both_tables
                - case_sensitive_names
                - matching_order
                - matching_data_type

Some time could be saved by not actually running the query: once it compiles successfully, the test can be passed.

Describe alternatives you've considered

Using a bash/python script to expect getting an error, as suggested by @jtcohen6 here. I think you could get something working that way, but it wouldn't be as nice as defining everything in normal tests. You'd need some way of differentiating between tests that should be run by the expecting-failure script and those that can be run normally; tags could maybe be used for that.
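For concreteness, a minimal sketch of that script approach in Python. The expect_compile_error tag is a made-up convention, and this assumes dbt exits non-zero and prints "Compilation Error" when a macro raises one:

import subprocess

# Run only the tests tagged as expecting a compile error (hypothetical tag).
result = subprocess.run(
    ['dbt', 'test', '--models', 'tag:expect_compile_error'],
    capture_output=True,
    text=True,
)

# Treat a non-zero exit plus a "Compilation Error" message as the expected outcome.
if result.returncode != 0 and 'Compilation Error' in result.stdout:
    print('PASS: compile error raised as expected')
else:
    raise SystemExit('FAIL: expected a compile error, but dbt succeeded')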

Additional context

Somewhat related to #2982, which requests a way to specify that a test should fail. It differs in that #2982 would react to a returned result, whereas this one would need to react to a compile error.

Who will this benefit?

Anyone using compile errors

Are you interested in contributing this feature?

Yes, though I don't know where to start, or whether this should be a feature of an external library.

@jtcohen6
Contributor

@MarkMacArdle This is such a neat line of thinking. Thanks for opening it as a separate issue.

Within dbt integration tests, we use unittest.assertRaises to accomplish exactly the thing you're describing. For instance:
https://github.com/fishtown-analytics/dbt/blob/6c6649f9129d5d108aa3b0526f634cd8f3a9d1ed/test/integration/013_context_var_tests/test_context_vars.py#L217-L220
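As a rough sketch of that pattern (the class and helper names follow dbt's own test suite; the exception name and import path are from dbt ~0.19 and may differ in other versions; the project/schema setup the base class requires is omitted, and this assumes the exception propagates out of run_dbt):

from dbt.exceptions import CompilationException
from test.integration.base import DBTIntegrationTest

class TestEqualityCompileError(DBTIntegrationTest):
    def test_compile_error_is_raised(self):
        # The schema test should fail while compiling, before any query runs.
        with self.assertRaises(CompilationException):
            self.run_dbt(['test', '--models', 'model_name'])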

Of course, it's possible to do the same via a bash script that executes an exception-raising test, then inspects the exit code and stdout from the previous command. The current integration testing suite is a little more elegant in that it enables, to a very basic degree, hooking into dbt's main point of entry as a python module (despite the very true fact that dbt does not have a documented or stable python API). We're actually thinking about packaging and releasing the DBTIntegrationTest base class as a module, for general-purpose use by adapter plugin maintainers: dbt-labs/dbt-adapter-tests#13

Your issue encourages a few more imaginative leaps:

  • Should we enable all project and package authors to write pythonic integration tests for their most complex Jinja macros? This would be in addition to (or instead of) the current seed + model + equality mechanism that we use for package integration tests (e.g. in dbt-utils), and which some folks use for unit testing (discourse). Should we eventually incorporate dbt-onic integration testing capabilities as a module within (or distributed with) dbt itself?
  • Should we add a feature specific to dbt tests, so as to hook a user-supplied expect: compile_error into exception handling (in python)? (A rough sketch of this idea follows the list.)
  • Should we try to add generic unittest-like capabilities within Jinja, so that it would be possible to (e.g.) catch and check exceptions right within a template?
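On the second of these, a purely hypothetical sketch of what such a hook could look like inside a test runner; run_test_node, compile_fn, execute_fn, and the expect config key are all illustrative names, not dbt internals:

from dbt.exceptions import CompilationException  # name as of dbt ~0.19

def run_test_node(node, compile_fn, execute_fn):
    """Illustrative only: pass the test when compilation raises as expected."""
    expects_error = node.config.get('expect') == 'compile_error'  # hypothetical key
    try:
        compiled_sql = compile_fn(node)
    except CompilationException:
        if expects_error:
            return 'pass'  # the compile error we were waiting for
        raise
    if expects_error:
        return 'fail'  # compiled cleanly, but a compile error was expected
    return execute_fn(compiled_sql)  # normal path: actually run the test query

Returning before execute_fn also captures the time saving noted in the proposal: when the error is expected, the query never has to run.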

I think the last of these might be a bigger lift than it's worth. Right now, I think about different personas doing dbt work, and the required skills / languages for each:

  1. dbt project contributor: must know SQL, some dbt-specific Jinja helps
  2. dbt package contributor: intermediate or advanced Jinja (dbt-specific + vanilla)
  3. dbt (core or plugin) contributor: SQL + Jinja + python + pytest-style frameworks

I don't know exactly how you'd slot yourself, but at the point of wanting to (a) raise custom compilation exceptions and (b) unit- or integration-test those custom exceptions, I imagine that writing a little bit of python may be necessary.

I figure you

(@kwigley No action needed here, but I think you might find this interesting.)

@jtcohen6 added the discussion and dbt tests labels, and removed the triage label, on Feb 10, 2021
@MarkMacArdle
Author

Should we try to add generic unittest-like capabilities within Jinja, so that it would be possible to (e.g.) catch and check exceptions right within a template?

When I first read about dbt and its testing capabilities this is actually what I presumed would be possible.

Should we add a feature specific to dbt tests, so as to hook a user-supplied expect: compile_error into exception handling (in python)?

This would be fine too and would allow getting to the same end. It sounds like it might be easier to implement.

I think it's important to be able to keep all tests in the yml files. As well as keeping everything in one place, it avoids users having to learn a new syntax when they want to add a test that expects an error.

My own experience is mostly SQL and python (although rarely using classes), plus Jinja since I started using dbt, and some unit testing. I'm going to try to look into the hook for the expect: compile_error approach. I'm in a busy period at work, so it will likely be a couple of weeks before I do much on this.

I haven't looked at how dbt actually runs tests before, so if there's a good place to start with that, please shout.

@github-actions
Contributor

This issue has been marked as Stale because it has been open for 180 days with no activity. If you would like the issue to remain open, please remove the stale label or comment on the issue, or it will be closed in 7 days.

github-actions bot added the stale label on Oct 17, 2022
@github-actions
Contributor

github-actions bot commented Nov 4, 2022

Although we are closing this issue as stale, it's not gone forever. Issues can be reopened if there is renewed community interest; add a comment to notify the maintainers.

github-actions bot closed this as completed on Nov 4, 2022