compiled dbt test output could be more helpful #517

Closed
drewbanin opened this issue Aug 22, 2017 · 6 comments · Fixed by #3316
Labels
enhancement (New feature or request)

Comments

@drewbanin
Contributor

Consider compiling a simple select statement (w/ a limit) that can be directly run to find test failures. Right now, you need to change a select count(*) to select * and sometimes remove groups, add limits, etc.
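For illustration (all names here are made up), the compiled unique test today looks roughly like this:

    select count(*)
    from (
        select order_id
        from analytics.orders
        where order_id is not null
        group by order_id
        having count(*) > 1
    ) validation_errors

whereas the query you actually want to run while debugging is closer to this:

    select order_id, count(*) as n_records
    from analytics.orders
    where order_id is not null
    group by order_id
    having count(*) > 1
    limit 100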

drewbanin added the enhancement label on Aug 22, 2017
@mikekaminsky
Contributor

Not sure if this belongs here or in a separate issue, but it'd be great if the test output printed out in the order of the DAG (and grouped by the model they reference). So the tests for the base models should be at the top, and those for the most downstream (and more abstract) models should be at the bottom.

@sagarvxyz

sagarvxyz commented Jul 16, 2019

Might not be applicable for all use cases, but it could also be useful to write the test results as dbt output (with appropriate "refs") into a "Test Results" folder. I can see this being helpful if a user wants to:

  • build a report in a BI tool that surfaces failing test results
  • include all ids from a test failure in an "exclusion" query or some other exclusion logic (a sketch follows this list). Then a production workflow could be something like dbt run --models base, dbt test --models base, dbt run --models mart.test_exclusions, etc.
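A rough sketch of that second case, assuming the failing ids were materialized somewhere ref-able (the relation and column names here are hypothetical):

    -- models/mart/test_exclusions.sql (hypothetical)
    -- keep only rows whose ids did not appear in a persisted test-failure output
    select orders.*
    from {{ ref('base_orders') }} as orders
    left join {{ ref('base_orders_test_failures') }} as failures
        on failures.order_id = orders.order_id
    where failures.order_id is null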

@joshtemple
Contributor

There are some great thoughts in this Discourse post that I wanted to expand on.

To me, the fundamental issue is that the query used to test is often different from the query needed to debug (I think "debug" is a more accurate term for this than "audit").

This means the dbt user has to take a lot of steps to get to the bottom of a test failure:

  1. Navigate to the compiled test SQL (this can be sped up with knowledge of command-line operations, like piping the file to the clipboard or to a command-line SQL client)
  2. Copy it into a SQL client of some kind
  3. Modify the SQL into a debug query that returns the actual primary keys or rows. For some tests this is trivial; for others, it can be time-consuming.
  4. Run the debug query and examine the results

This is especially problematic when debugging a CI failure, where the compiled test queries aren't directly available to the user unless they have saved them as artifacts of the build.

It would be great if dbt did all of this and presented the test failures clearly!

For example:

  • For a uniqueness test failure, I want to see the count of rows with duplicated values for the column and potentially the count of duplications for each primary key
  • For an accepted values test failure, I want to see which unacceptable values were present and the count of rows where they occurred
  • For a not null test failure, I want to see the count of rows in my table that are null for the tested column (rough SQL shapes for all three follow below).
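Rough shapes of those debug queries, assuming an orders table with an order_id key and a status column (all names illustrative):

    -- uniqueness: which keys are duplicated, and how many times
    select order_id, count(*) as n_records
    from analytics.orders
    group by order_id
    having count(*) > 1

    -- accepted values: which unexpected values appeared, and how often
    select status, count(*) as n_records
    from analytics.orders
    where status not in ('placed', 'shipped', 'returned')
    group by status

    -- not null: how many rows are null for the tested column
    select count(*) as n_null_rows
    from analytics.orders
    where order_id is null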

@drewbanin, you suggested implementing a separate debug query from the test query, either as a CTE or as a comment. I think this is a great idea. However, this only eliminates step 3 above. Why not set this up so dbt runs the debug query on test failure, returns the first 2-3 results to the command line, and logs the full result to an audit table (re: #903)?

Something like this?

dbt test --models orders

Failure in test unique_orders_order_id (models/orders/schema.yml)

  Got 2 results, expected 0.
 
  order_id           count  
  ------------------------
  1745291294829100   2      
  1384819819499118   10     
  
  compiled test SQL at target/compiled/.../unique_orders_order_id.sql
  compiled debug SQL at target/compiled/.../unique_orders_order_id.sql

Complete test results logged to table dbt.test_failures

@foundinblank

I second @joshtemple's idea of having a debug query that gets kicked off when a test fails. It would be especially useful with time-based data tests. I have several tests that only look at the last 24 hours' data, so it's a pain to convert the test SQL into debug SQL AND fix the dates so they cover the same 24-hour period as the failed test (which might've been a few days ago). A debug query kicking off right after a failed test would be able to look at the exact same time period and retain the results.
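For concreteness, the kind of test I mean looks something like this (everything here is illustrative, and the dateadd syntax varies by warehouse); a debug query kicked off immediately after the failure would see the same window:

    -- tests/orders_missing_payments_last_24h.sql (illustrative)
    -- returns a row for each recent order with no matching payment; any rows = failure
    select orders.order_id
    from {{ ref('orders') }} as orders
    left join {{ ref('payments') }} as payments
        on payments.order_id = orders.order_id
    where orders.ordered_at >= dateadd('hour', -24, current_timestamp)
      and payments.order_id is null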

@drewbanin
Contributor Author

@joshtemple I'm relatively happy to have dbt run a query here and print the results to stdout, though I do think it could get kind of messy for some types of (mostly custom) tests. The example shown here (uniqueness) makes a ton of sense, but it's less clear to me what the output should be for not_null tests! Maybe that particular test is the odd one out, I just want to make the claim that it won't always be obvious what the "results" of a failed test should be.

I still love the idea of persisting test failures in tables, but I'm unsure about the mechanisms that dbt should employ to persist & manage these tests. Databases like BQ support table expiration, but on other databases, dbt will need to clean up these test tables in order to not make a really big mess of things.

Some very practical (and tractable) questions for us to consider:

  • dbt's tests have auto-generated names that probably won't be useful as table names in a database. How should we name these tables?
  • should these test tables "expire"?
    • dbt can create the test tables all in a specific schema (like {{ model.schema }}__tests) -- should it drop that schema before invoking tests?
    • should we stick a timestamp in the schema names, then delete the schemas after some expiration?
  • how should users configure which test failures get persisted in tables? Should we add a config to schema.yml? How do we make this environment-aware (e.g. persist in prod but not in dev)?

Curious about your collective thoughts on these questions, or any other ideas/questions that are conjured up as you think about this one!

@norton120

So I started in on a branch to do a very basic first pass at this, the idea being to remove the count wrapper, dump the test results into the node's agate_table at test run, and then print the first 3-5 rows in the failure summary. To make this work for schema tests, I was planning on expanding the wrapped_sql to get a full count(*) on the test query (so the status still displays the total number of failures), then getting a few full records to union in so we don't end up ETL'ing the whole warehouse during test runs... But now I'm scratching my head over how to cleanly do an ANSI-safe limit that will play nicely with all the adapters. I really want to keep this as part of the canonical test run output if possible... just not 100% clear on how yet.
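Very roughly, the wrapped shape I'm picturing is something like the sketch below; the inner query is just an illustrative unique test, and the limit in the sample CTE is exactly the part that isn't ANSI-portable:

    with test_query as (

        -- the user's compiled test SQL goes here unchanged; an illustrative unique test:
        select order_id
        from analytics.orders
        group by order_id
        having count(*) > 1

    ),

    failure_count as (

        select count(*) as num_failures from test_query

    ),

    sample_failures as (

        select * from test_query limit 5

    )

    select failure_count.num_failures, sample_failures.*
    from failure_count
    cross join sample_failures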

On the subject of materialising test results in a folder or table, can I suggest that this is probably better served by a set of audit models? It seems like a good philosophical line to draw: the sole responsibility of tests is to support the development and deployment cycle, and anything that creates consumable data (especially audit data) belongs in a model, where it can be correctly governed with post-hooks, docs, positioning in the DAG, etc. It is generally easy to write a macro that gets called in both the model and the test to keep things DRY (a minimal sketch is at the end of this comment), but we don't want to slow down testing with data governance concerns, or compromise data governance for testing.

In the sample-failing-rows branch I have them printing directly to stdout with a debug log line ("sample data displayed in console") so we aren't dumping actual records into log files -- slippery slope and whatnot.
We did this log layer a while back: https://github.com/norton120/kimball_dbt. We're doing a fresh implementation this winter at my new gig, which we can try to release as a package. That may help?
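A minimal sketch of the shared-macro pattern mentioned above (macro, model, and test names are all made up):

    -- macros/duplicate_order_ids.sql (hypothetical)
    {% macro duplicate_order_ids() %}
        select order_id, count(*) as n_records
        from {{ ref('orders') }}
        group by order_id
        having count(*) > 1
    {% endmacro %}

    -- models/audit/audit_duplicate_orders.sql: governed like any other model
    select * from ({{ duplicate_order_ids() }}) as duplicates

    -- tests/no_duplicate_orders.sql: the same query run as a data test
    {{ duplicate_order_ids() }}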
