
Commit

Merge pull request #146 from fivetran/bugfix/duplicate_sla_event_ids_from_dst

debug duplicate sla event ids
fivetran-reneeli authored May 1, 2024
2 parents 987257a + ef63c08 commit c71cbe7
Showing 13 changed files with 121 additions and 31 deletions.
16 changes: 16 additions & 0 deletions CHANGELOG.md
@@ -1,3 +1,19 @@
# dbt_zendesk v0.15.0

## 🚨 Minor Upgrade 🚨
Although this update is not a breaking change, it will significantly impact the output of the `zendesk__sla_policies` model. [PR #146](https://github.com/fivetran/dbt_zendesk/pull/146) includes the following changes:

## Bug Fixes
- Fixes the issue of potential duplicate `sla_event_id`s occurring in the `zendesk__sla_policies` model.
  - This involved updating the `int_zendesk__schedule_spine` model, which previously output overlapping schedule windows, to account for holidays that span beyond a given schedule week.
  - It also involved updating the `int_zendesk__reply_time_business_hours` model, in which two different versions of a schedule could exist because of daylight saving time.
- Improved performance by adjusting the `int_zendesk__reply_time_business_hours` model to perform the weeks cartesian join only as far into the future as each ticket requires.
  - Previously the `int_zendesk__reply_time_business_hours` model performed a cartesian join on all tickets to generate weeks into the future. This was required to accurately calculate `sla_elapsed_time` for tickets with first replies far in the future, but it was only necessary for a handful of tickets. The join now generates future weeks only up to a ticket's first reply time or first solved time (see the sketch after this list).
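
The sketch below condenses the fix from the `week_index_calc`, `weeks`, and `weeks_cross_ticket_sla_applied` CTEs added in this commit. It is a simplified illustration rather than the full model: `first_reply_solve_times` stands in for the upstream CTE that supplies `first_reply_time` and `first_solved_time` per SLA event, and supporting columns are omitted.

```sql
with week_index_calc as (

    select
        *,
        -- weeks from when the SLA was applied to the earlier of the first reply or first solve
        {{ dbt.datediff("sla_applied_at", "least(coalesce(first_reply_time, " ~ dbt.current_timestamp() ~ "), coalesce(first_solved_time, " ~ dbt.current_timestamp() ~ "))", "week") }} + 1 as week_index
    from first_reply_solve_times

), weeks as (

    -- 52 generated rows (one year of weeks) rather than the previous 208
    {{ dbt_utils.generate_series(52) }}

)

select
    week_index_calc.*,
    cast(weeks.generated_number - 1 as {{ dbt.type_int() }}) as week_number
from week_index_calc
cross join weeks
-- only keep week rows up to and including the week of the first reply or first solve
where week_index >= generated_number - 1
```

Tickets without a first reply or solve fall back to the current timestamp inside `least(...)`, so weeks are still generated up to the present.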

## Documentation Updates
- Added a reference to the prebuilt Fivetran [Zendesk Streamlit report](https://fivetran-zendesk.streamlit.app/) in the README.
- Updated the DECISIONLOG to note that the generated time series for ticket SLA policies is limited to a year into the future to maintain performance.

# dbt_zendesk v0.14.0

[PR #136](https://github.com/fivetran/dbt_zendesk/pull/136) includes the following changes:
3 changes: 3 additions & 0 deletions DECISIONLOG.md
@@ -1,5 +1,8 @@
# Decision Log

## Tracking Ticket SLA Policies Into the Future
In our models we generate a future time series for ticket SLA policies. This series is limited to one year into the future to maintain performance.
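
For illustration only, here is a minimal sketch of a one-year forward-looking day spine built with `dbt_utils.date_spine`. The macro call is an assumption for demonstration purposes; the package's actual spine model may be constructed differently.

```sql
-- Hypothetical sketch: one row per day from today through one year in the future.
{{ dbt_utils.date_spine(
    datepart="day",
    start_date="cast(" ~ dbt.current_timestamp() ~ " as date)",
    end_date=dbt.dateadd("year", 1, "cast(" ~ dbt.current_timestamp() ~ " as date)")
) }}
```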

## No Historical Schedule Reference
At the current moment the Fivetran Zendesk Support connector does not capture historical schedule data. This means that if a schedule is created in the Zendesk Support UI, remains untouched for years, and is then adjusted in the current month, the raw `schedule` table will be synced to reflect only the current, adjusted schedule. As a result, the raw data loses all historical reference to what the schedule range was previously.

3 changes: 3 additions & 0 deletions README.md
@@ -37,6 +37,9 @@ The following table provides a detailed list of final models materialized within
| [zendesk__ticket_backlog](https://fivetran.github.io/dbt_zendesk/#!/model/model.zendesk.zendesk__ticket_backlog) | A daily historical view of the ticket field values defined in the `ticket_field_history_columns` variable for all backlog tickets. Backlog tickets being defined as any ticket not in a 'closed', 'deleted', or 'solved' status. |
| [zendesk__ticket_field_history](https://fivetran.github.io/dbt_zendesk/#!/model/model.zendesk.zendesk__ticket_field_history) | A daily historical view of the ticket field values defined in the `ticket_field_history_columns` variable and the corresponding updater fields defined in the `ticket_field_history_updater_columns` variable. |
| [zendesk__sla_policies](https://fivetran.github.io/dbt_zendesk/#!/model/model.zendesk.zendesk__sla_policies) | Each record represents an SLA policy event and additional SLA breach and achievement metrics. Calendar and business hour SLA breaches are supported. |

Many of the above reports are now configurable for [visualization via Streamlit](https://github.com/fivetran/streamlit_zendesk)! Check out some [sample reports here](https://fivetran-zendesk.streamlit.app/).

<!--section-end-->

# 🎯 How do I use the dbt package?
2 changes: 1 addition & 1 deletion dbt_project.yml
@@ -1,5 +1,5 @@
name: 'zendesk'
version: '0.14.0'
version: '0.15.0'


config-version: 2
2 changes: 1 addition & 1 deletion docs/catalog.json

Large diffs are not rendered by default.

4 changes: 2 additions & 2 deletions docs/index.html

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/manifest.json

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/run_results.json

Large diffs are not rendered by default.

10 changes: 5 additions & 5 deletions integration_tests/ci/sample.profiles.yml
@@ -16,13 +16,13 @@ integration_tests:
pass: "{{ env_var('CI_REDSHIFT_DBT_PASS') }}"
dbname: "{{ env_var('CI_REDSHIFT_DBT_DBNAME') }}"
port: 5439
schema: zendesk_integration_tests_41
schema: zendesk_integration_tests_50
threads: 8
bigquery:
type: bigquery
method: service-account-json
project: 'dbt-package-testing'
schema: zendesk_integration_tests_41
schema: zendesk_integration_tests_50
threads: 8
keyfile_json: "{{ env_var('GCLOUD_SERVICE_KEY') | as_native }}"
snowflake:
@@ -33,7 +33,7 @@ integration_tests:
role: "{{ env_var('CI_SNOWFLAKE_DBT_ROLE') }}"
database: "{{ env_var('CI_SNOWFLAKE_DBT_DATABASE') }}"
warehouse: "{{ env_var('CI_SNOWFLAKE_DBT_WAREHOUSE') }}"
schema: zendesk_integration_tests_41
schema: zendesk_integration_tests_50
threads: 8
postgres:
type: postgres
@@ -42,13 +42,13 @@ integration_tests:
pass: "{{ env_var('CI_POSTGRES_DBT_PASS') }}"
dbname: "{{ env_var('CI_POSTGRES_DBT_DBNAME') }}"
port: 5432
schema: zendesk_integration_tests_41
schema: zendesk_integration_tests_50
threads: 8
databricks:
catalog: "{{ env_var('CI_DATABRICKS_DBT_CATALOG') }}"
host: "{{ env_var('CI_DATABRICKS_DBT_HOST') }}"
http_path: "{{ env_var('CI_DATABRICKS_DBT_HTTP_PATH') }}"
schema: zendesk_integration_tests_41
schema: zendesk_integration_tests_50
threads: 8
token: "{{ env_var('CI_DATABRICKS_DBT_TOKEN') }}"
type: databricks
4 changes: 2 additions & 2 deletions integration_tests/dbt_project.yml
@@ -1,12 +1,12 @@
config-version: 2

name: 'zendesk_integration_tests'
version: '0.14.0'
version: '0.15.0'

profile: 'integration_tests'

vars:
zendesk_schema: zendesk_integration_tests_41
zendesk_schema: zendesk_integration_tests_50
zendesk_source:
zendesk_organization_identifier: "organization_data"
zendesk_schedule_identifier: "schedule_data"
19 changes: 15 additions & 4 deletions models/intermediate/int_zendesk__schedule_spine.sql
@@ -20,10 +20,21 @@ with timezone as (
select *
from {{ var('schedule') }}

), schedule_holiday as (
-- in the below CTE we want to explode out each holiday period into individual days, to prevent potential fanouts downstream in joins to schedules.
), schedule_holiday as (

select *
from {{ var('schedule_holiday') }}
select
_fivetran_synced,
cast(date_day as {{ dbt.type_timestamp() }} ) as holiday_start_date_at, -- For each day within a holiday we want to give it its own record. In the later CTE holiday_start_end_times, we transform these timestamps into minutes-from-beginning-of-the-week.
cast(date_day as {{ dbt.type_timestamp() }} ) as holiday_end_date_at, -- Since each day within a holiday now gets its own record, the end_date will then be the same day as the start_date. In the later CTE holiday_start_end_times, we transform these timestamps into minutes-from-beginning-of-the-week.
holiday_id,
holiday_name,
schedule_id

from {{ var('schedule_holiday') }}
inner join {{ ref('int_zendesk__calendar_spine') }}
on holiday_start_date_at <= cast(date_day as {{ dbt.type_timestamp() }} )
and holiday_end_date_at >= cast(date_day as {{ dbt.type_timestamp() }} )

), timezone_with_dt as (

@@ -350,4 +361,4 @@
)

select *
from final
from final
@@ -17,6 +17,35 @@ with ticket_schedules as (
select *
from {{ ref('int_zendesk__sla_policy_applied') }}

), users as (

select *
from {{ ref('int_zendesk__user_aggregates') }}

), ticket_updates as (

select *
from {{ ref('int_zendesk__updates') }}

), ticket_solved_times as (
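-- status updates where the ticket moved into a 'solved' or 'closed' state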
select
ticket_id,
valid_starting_at as solved_at
from ticket_updates
where field_name = 'status'
and value in ('solved','closed')

), reply_time as (
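-- public comments made by agents or admins, treated as agent replies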
select
ticket_comment.ticket_id,
ticket_comment.valid_starting_at as reply_at,
commenter.role
from ticket_updates as ticket_comment
join users as commenter
on commenter.user_id = ticket_comment.user_id
where field_name = 'comment'
and ticket_comment.is_public
and commenter.role in ('agent','admin')

), schedule_business_hours as (

@@ -48,20 +77,53 @@ with ticket_schedules as (
on ticket_schedules.schedule_id = schedule_business_hours.schedule_id
where sla_policy_applied.in_business_hours
and metric in ('next_reply_time', 'first_reply_time')


), first_reply_solve_times as (
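-- for each SLA event, find the first agent reply and the first solve that occurred after the SLA was applied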
select
ticket_sla_applied_with_schedules.ticket_id,
ticket_sla_applied_with_schedules.ticket_created_at,
ticket_sla_applied_with_schedules.valid_starting_at,
ticket_sla_applied_with_schedules.ticket_current_status,
ticket_sla_applied_with_schedules.metric,
ticket_sla_applied_with_schedules.latest_sla,
ticket_sla_applied_with_schedules.sla_applied_at,
ticket_sla_applied_with_schedules.target,
ticket_sla_applied_with_schedules.in_business_hours,
ticket_sla_applied_with_schedules.sla_policy_name,
ticket_sla_applied_with_schedules.schedule_id,
ticket_sla_applied_with_schedules.start_time_in_minutes_from_week,
ticket_sla_applied_with_schedules.total_schedule_weekly_business_minutes,
ticket_sla_applied_with_schedules.start_week_date,
min(reply_time.reply_at) as first_reply_time,
min(ticket_solved_times.solved_at) as first_solved_time
from ticket_sla_applied_with_schedules
left join reply_time
on reply_time.ticket_id = ticket_sla_applied_with_schedules.ticket_id
and reply_time.reply_at > ticket_sla_applied_with_schedules.sla_applied_at
left join ticket_solved_times
on ticket_sla_applied_with_schedules.ticket_id = ticket_solved_times.ticket_id
and ticket_solved_times.solved_at > ticket_sla_applied_with_schedules.sla_applied_at
{{ dbt_utils.group_by(n=14) }}

), week_index_calc as (
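-- number of weeks from when the SLA was applied to the earlier of the first reply or first solve, used below to cap how many future weeks are generated per ticket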
select
*,
{{ dbt.datediff("sla_applied_at", "least(coalesce(first_reply_time, " ~ dbt.current_timestamp() ~ "), coalesce(first_solved_time, " ~ dbt.current_timestamp() ~ "))", "week") }} + 1 as week_index
from first_reply_solve_times

), weeks as (

{{ dbt_utils.generate_series(208) }}
{{ dbt_utils.generate_series(52) }}

), weeks_cross_ticket_sla_applied as (
-- because time is reported in minutes since the beginning of the week, we have to split up time spent on the ticket into calendar weeks
select

ticket_sla_applied_with_schedules.*,
cast(generated_number - 1 as {{ dbt.type_int() }}) as week_number
select
week_index_calc.*,
cast(weeks.generated_number - 1 as {{ dbt.type_int() }}) as week_number

from ticket_sla_applied_with_schedules
from week_index_calc
cross join weeks
where week_index >= generated_number - 1

), weekly_periods as (

@@ -88,8 +150,8 @@
and weekly_periods.schedule_id = schedule.schedule_id
-- this chooses the Daylight Savings Time or Standard Time version of the schedule
-- We have everything calculated within a week, so take us to the appropriate week first by adding the week_number * minutes-in-a-week to the minute-mark where we start and stop counting for the week
and cast( {{ dbt.dateadd(datepart='minute', interval='week_number * (7*24*60) + ticket_week_end_time', from_date_or_timestamp='start_week_date') }} as {{ dbt.type_timestamp() }}) > cast(schedule.valid_from as {{ dbt.type_timestamp() }})
and cast( {{ dbt.dateadd(datepart='minute', interval='week_number * (7*24*60) + ticket_week_start_time', from_date_or_timestamp='start_week_date') }} as {{ dbt.type_timestamp() }}) < cast(schedule.valid_until as {{ dbt.type_timestamp() }})
and cast ({{ dbt.dateadd(datepart='minute', interval='week_number * (7*24*60) + ticket_week_end_time', from_date_or_timestamp='start_week_date') }} as date) > cast(schedule.valid_from as date)
and cast ({{ dbt.dateadd(datepart='minute', interval='week_number * (7*24*60) + ticket_week_start_time', from_date_or_timestamp='start_week_date') }} as date) < cast(schedule.valid_until as date)

), intercepted_periods_with_breach_flag as (

@@ -107,11 +107,6 @@ with reply_time_calendar_hours_sla as (
and ticket_solved_times.solved_at > reply_time_breached_at.sla_applied_at
{{ dbt_utils.group_by(n=10) }}

{% if var('using_schedules', True) %}
having (in_business_hours and week_number <= min({{ dbt.datediff("reply_time_breached_at.sla_applied_at", "coalesce(reply_time.reply_at, ticket_solved_times.solved_at, " ~ dbt.current_timestamp() ~ ")", 'week') }}))
or not in_business_hours
{% endif %}

), lagging_time_block as (
select
*,
