feat: Add timezone to kube_cronjob_info / Make kube_cronjob_next_schedule_time timezone-aware #2376

Open · wants to merge 2 commits into base: `main`
2 changes: 1 addition & 1 deletion docs/metrics/workload/cronjob-metrics.md
@@ -3,7 +3,7 @@
| Metric name | Metric type | Description | Labels/tags | Status |
| ---------------------------------------------- | ----------- | ------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------ |
| kube_cronjob_annotations | Gauge | Kubernetes annotations converted to Prometheus labels controlled via [--metric-annotations-allowlist](../../developer/cli-arguments.md) | `cronjob`=&lt;cronjob-name&gt; <br> `namespace`=&lt;cronjob-namespace&gt; <br> `annotation_CRONJOB_ANNOTATION`=&lt;CRONJOB_ANNOTATION&gt; | EXPERIMENTAL |
| kube_cronjob_info | Gauge | | `cronjob`=&lt;cronjob-name&gt; <br> `namespace`=&lt;cronjob-namespace&gt; <br> `schedule`=&lt;schedule&gt; <br> `concurrency_policy`=&lt;concurrency-policy&gt; | STABLE |
| kube_cronjob_info | Gauge | | `cronjob`=&lt;cronjob-name&gt; <br> `namespace`=&lt;cronjob-namespace&gt; <br> `schedule`=&lt;schedule&gt; <br> `concurrency_policy`=&lt;concurrency-policy&gt; <br> `timezone`=&lt;timezone&gt; | STABLE |
| kube_cronjob_labels | Gauge | Kubernetes labels converted to Prometheus labels controlled via [--metric-labels-allowlist](../../developer/cli-arguments.md) | `cronjob`=&lt;cronjob-name&gt; <br> `namespace`=&lt;cronjob-namespace&gt; <br> `label_CRONJOB_LABEL`=&lt;CRONJOB_LABEL&gt; | STABLE |
| kube_cronjob_created | Gauge | | `cronjob`=&lt;cronjob-name&gt; <br> `namespace`=&lt;cronjob-namespace&gt; | STABLE |
| kube_cronjob_next_schedule_time | Gauge | | `cronjob`=&lt;cronjob-name&gt; <br> `namespace`=&lt;cronjob-namespace&gt; | STABLE |
16 changes: 12 additions & 4 deletions internal/store/cronjob.go
@@ -96,11 +96,15 @@ func cronJobMetricFamilies(allowAnnotationsList, allowLabelsList []string) []gen
basemetrics.STABLE,
"",
wrapCronJobFunc(func(j *batchv1.CronJob) *metric.Family {
timeZone := "local"
if j.Spec.TimeZone != nil {
timeZone = *j.Spec.TimeZone
}
return &metric.Family{
Metrics: []*metric.Metric{
{
LabelKeys: []string{"schedule", "concurrency_policy"},
LabelValues: []string{j.Spec.Schedule, string(j.Spec.ConcurrencyPolicy)},
LabelKeys: []string{"schedule", "concurrency_policy", "timezone"},
**Member:** Wouldn't it be better in its own metric rather than in the info metric?

**Member Author:** There's a 1:1 relationship; why would you prefer a dedicated metric for it?

**Member Author:** @dgrisonnet any feedback here?

**Member:** _info is not a very intuitive suffix, which is why I don't really like these metrics where we just end up dumping a bunch of labels that aren't really related to one another.

I don't know what the historical reasoning for putting labels into these metrics was, but I can also see some dedicated metrics for values that are part of an object's spec:

```go
func createPodRestartPolicyFamilyGenerator() generator.FamilyGenerator {
	return *generator.NewFamilyGeneratorWithStability(
		"kube_pod_restart_policy",
		"Describes the restart policy in use by this pod.",
		metric.Gauge,
		basemetrics.STABLE,
		"",
		wrapPodFunc(func(p *v1.Pod) *metric.Family {
			return &metric.Family{
				Metrics: []*metric.Metric{
					{
						LabelKeys:   []string{"type"},
						LabelValues: []string{string(p.Spec.RestartPolicy)},
						Value:       float64(1),
					},
				},
			}
		}),
	)
}
```

To me, info should be limited to the default labels. Any other information that we want to expose should be in its own dedicated metric. What is your opinion on that?

**Member Author (@mrueg), May 15, 2024:** I agree on having a separate metric for anything that does not have a 1:1 relationship within the object. For 1:1 relationships, additional time series make it difficult to correlate between multiple labels vs. simply extracting the ones you need: `max(resource_info) by (label1, label2, label3)`. Trying to do the same with multiple metrics can get very annoying and more error-prone.

To me, the _info metric should describe all single-value keys of the object that are static over its lifetime and non-numeric.

**Member:** Hmm, there seems to be a naming discrepancy between per-property gauges and per-object info metrics, so there's no clear pattern we can adhere to.

That being said, accumulating all cardinality-bound fields as label sets into a per-object _info metric opens the door to metrics with a large number of labels. The metric itself can grow to become unintentionally high-cardinality, and dropping unwanted labels will entail relabelling effort. Splitting this into per-field metrics would limit the cardinality and allow for more granular control.

Users should still be able to add the timezone label to any exported metric using a join such as `foo_metric * on(job) group_left(timezone) kube_cronjob_timezone`.

**Member Author:** I tried to define it here:

> the _info metric should describe all single-value keys of the object that are static over its lifetime and non-numeric

**Member:** I meant a documentation/standard of sorts in the repository that enforces this. Not sure if there is one.

**Member Author:** I can add that in a follow-up PR.

LabelValues: []string{j.Spec.Schedule, string(j.Spec.ConcurrencyPolicy), timeZone},
Value: 1,
},
},
@@ -245,7 +249,7 @@ func cronJobMetricFamilies(allowAnnotationsList, allowLabelsList []string) []gen
ms := []*metric.Metric{}

// If the cron job is suspended, don't track the next scheduled time
nextScheduledTime, err := getNextScheduledTime(j.Spec.Schedule, j.Status.LastScheduleTime, j.CreationTimestamp)
nextScheduledTime, err := getNextScheduledTime(j.Spec.Schedule, j.Status.LastScheduleTime, j.CreationTimestamp, j.Spec.TimeZone)
if err != nil {
panic(err)
} else if !*j.Spec.Suspend {
@@ -347,7 +351,11 @@ func createCronJobListWatch(kubeClient clientset.Interface, ns string, fieldSele
}
}

func getNextScheduledTime(schedule string, lastScheduleTime *metav1.Time, createdTime metav1.Time) (time.Time, error) {
func getNextScheduledTime(schedule string, lastScheduleTime *metav1.Time, createdTime metav1.Time, timeZone *string) (time.Time, error) {
if timeZone != nil {
schedule = fmt.Sprintf("CRON_TZ=%s %s", *timeZone, schedule)
}

sched, err := cron.ParseStandard(schedule)
if err != nil {
return time.Time{}, fmt.Errorf("Failed to parse cron job schedule '%s': %w", schedule, err)
218 changes: 174 additions & 44 deletions internal/store/cronjob_test.go
@@ -40,68 +40,163 @@ var (
ActiveRunningCronJob1LastScheduleTime = time.Unix(1520742896, 0)
SuspendedCronJob1LastScheduleTime = time.Unix(1520742896+5.5*3600, 0) // 5.5 hours later
ActiveCronJob1NoLastScheduledCreationTimestamp = time.Unix(1520742896+6.5*3600, 0)
TimeZone = "Asia/Shanghai"
)

func TestCronJobStore(t *testing.T) {
hour := ActiveRunningCronJob1LastScheduleTime.Hour()
ActiveRunningCronJob1NextScheduleTime := time.Time{}
func calculateNextSchedule6h(timestamp time.Time, timezone string) time.Time {
loc, _ := time.LoadLocation(timezone)
hour := timestamp.In(loc).Hour()
switch {
case hour < 6:
ActiveRunningCronJob1NextScheduleTime = time.Date(
ActiveRunningCronJob1LastScheduleTime.Year(),
ActiveRunningCronJob1LastScheduleTime.Month(),
ActiveRunningCronJob1LastScheduleTime.Day(),
return time.Date(
timestamp.Year(),
timestamp.Month(),
timestamp.Day(),
6,
0,
0, 0, time.Local)
0, 0, loc)
case hour < 12:
ActiveRunningCronJob1NextScheduleTime = time.Date(
ActiveRunningCronJob1LastScheduleTime.Year(),
ActiveRunningCronJob1LastScheduleTime.Month(),
ActiveRunningCronJob1LastScheduleTime.Day(),
return time.Date(
timestamp.Year(),
timestamp.Month(),
timestamp.Day(),
12,
0,
0, 0, time.Local)
0, 0, loc)
case hour < 18:
ActiveRunningCronJob1NextScheduleTime = time.Date(
ActiveRunningCronJob1LastScheduleTime.Year(),
ActiveRunningCronJob1LastScheduleTime.Month(),
ActiveRunningCronJob1LastScheduleTime.Day(),
return time.Date(
timestamp.Year(),
timestamp.Month(),
timestamp.Day(),
18,
0,
0, 0, time.Local)
case hour < 24:
ActiveRunningCronJob1NextScheduleTime = time.Date(
ActiveRunningCronJob1LastScheduleTime.Year(),
ActiveRunningCronJob1LastScheduleTime.Month(),
ActiveRunningCronJob1LastScheduleTime.Day(),
24,
0, 0, loc)
default:
return time.Date(
timestamp.Year(),
timestamp.Month(),
timestamp.Day()+1,
0,
0,
0, 0, time.Local)
0, 0, loc)
}
}

minute := ActiveCronJob1NoLastScheduledCreationTimestamp.Minute()
ActiveCronJob1NoLastScheduledNextScheduleTime := time.Time{}
func calculateNextSchedule25m(timestamp time.Time, timezone string) time.Time {
loc, _ := time.LoadLocation(timezone)
minute := timestamp.In(loc).Minute()
switch {
case minute < 25:
ActiveCronJob1NoLastScheduledNextScheduleTime = time.Date(
ActiveCronJob1NoLastScheduledCreationTimestamp.Year(),
ActiveCronJob1NoLastScheduledCreationTimestamp.Month(),
ActiveCronJob1NoLastScheduledCreationTimestamp.Day(),
ActiveCronJob1NoLastScheduledCreationTimestamp.Hour(),
return time.Date(
timestamp.Year(),
timestamp.Month(),
timestamp.Day(),
timestamp.Hour(),
25,
0, 0, time.Local)
0, 0, loc)
default:
ActiveCronJob1NoLastScheduledNextScheduleTime = time.Date(
ActiveCronJob1NoLastScheduledNextScheduleTime.Year(),
ActiveCronJob1NoLastScheduledNextScheduleTime.Month(),
ActiveCronJob1NoLastScheduledNextScheduleTime.Day(),
ActiveCronJob1NoLastScheduledNextScheduleTime.Hour()+1,
return time.Date(
timestamp.Year(),
timestamp.Month(),
timestamp.Day(),
timestamp.Hour()+1,
25,
0, 0, time.Local)
0, 0, loc)
}

}
func TestCronJobStore(t *testing.T) {

ActiveRunningCronJob1NextScheduleTime := calculateNextSchedule6h(ActiveRunningCronJob1LastScheduleTime, "Local")
ActiveRunningCronJobWithTZ1NextScheduleTime := calculateNextSchedule6h(ActiveRunningCronJob1LastScheduleTime, TimeZone)

ActiveCronJob1NoLastScheduledNextScheduleTime := calculateNextSchedule25m(ActiveCronJob1NoLastScheduledCreationTimestamp, "Local")

cases := []generateMetricsTestCase{
{
AllowAnnotationsList: []string{
"app.k8s.io/owner",
},
Obj: &batchv1.CronJob{
ObjectMeta: metav1.ObjectMeta{
Name: "ActiveRunningCronJobWithTZ1",
Namespace: "ns1",
Generation: 1,
ResourceVersion: "11111",
Labels: map[string]string{
"app": "example-active-running-with-tz-1",
},
Annotations: map[string]string{
"app": "mysql-server",
"app.k8s.io/owner": "@foo",
},
},
Status: batchv1.CronJobStatus{
Active: []v1.ObjectReference{{Name: "FakeJob1"}, {Name: "FakeJob2"}},
LastScheduleTime: &metav1.Time{Time: ActiveRunningCronJob1LastScheduleTime},
LastSuccessfulTime: nil,
},
Spec: batchv1.CronJobSpec{
StartingDeadlineSeconds: &StartingDeadlineSeconds300,
ConcurrencyPolicy: "Forbid",
Suspend: &SuspendFalse,
Schedule: "0 */6 * * *",
SuccessfulJobsHistoryLimit: &SuccessfulJobHistoryLimit3,
FailedJobsHistoryLimit: &FailedJobHistoryLimit1,
TimeZone: &TimeZone,
},
},
Want: `
# HELP kube_cronjob_created [STABLE] Unix creation timestamp
# HELP kube_cronjob_info [STABLE] Info about cronjob.
# HELP kube_cronjob_annotations Kubernetes annotations converted to Prometheus labels.
# HELP kube_cronjob_labels [STABLE] Kubernetes labels converted to Prometheus labels.
# HELP kube_cronjob_next_schedule_time [STABLE] Next time the cronjob should be scheduled. The time after lastScheduleTime, or after the cron job's creation time if it's never been scheduled. Use this to determine if the job is delayed.
# HELP kube_cronjob_spec_failed_job_history_limit Failed job history limit tells the controller how many failed jobs should be preserved.
# HELP kube_cronjob_spec_starting_deadline_seconds [STABLE] Deadline in seconds for starting the job if it misses scheduled time for any reason.
# HELP kube_cronjob_spec_successful_job_history_limit Successful job history limit tells the controller how many completed jobs should be preserved.
# HELP kube_cronjob_spec_suspend [STABLE] Suspend flag tells the controller to suspend subsequent executions.
# HELP kube_cronjob_status_active [STABLE] Active holds pointers to currently running jobs.
# HELP kube_cronjob_metadata_resource_version [STABLE] Resource version representing a specific version of the cronjob.
# HELP kube_cronjob_status_last_schedule_time [STABLE] LastScheduleTime keeps information of when was the last time the job was successfully scheduled.
# TYPE kube_cronjob_created gauge
# TYPE kube_cronjob_info gauge
# TYPE kube_cronjob_annotations gauge
# TYPE kube_cronjob_labels gauge
# TYPE kube_cronjob_next_schedule_time gauge
# TYPE kube_cronjob_spec_failed_job_history_limit gauge
# TYPE kube_cronjob_spec_starting_deadline_seconds gauge
# TYPE kube_cronjob_spec_successful_job_history_limit gauge
# TYPE kube_cronjob_spec_suspend gauge
# TYPE kube_cronjob_status_active gauge
# TYPE kube_cronjob_metadata_resource_version gauge
# TYPE kube_cronjob_status_last_schedule_time gauge
kube_cronjob_info{concurrency_policy="Forbid",cronjob="ActiveRunningCronJobWithTZ1",namespace="ns1",schedule="0 */6 * * *",timezone="Asia/Shanghai"} 1
kube_cronjob_annotations{annotation_app_k8s_io_owner="@foo",cronjob="ActiveRunningCronJobWithTZ1",namespace="ns1"} 1
kube_cronjob_spec_failed_job_history_limit{cronjob="ActiveRunningCronJobWithTZ1",namespace="ns1"} 1
kube_cronjob_spec_starting_deadline_seconds{cronjob="ActiveRunningCronJobWithTZ1",namespace="ns1"} 300
kube_cronjob_spec_successful_job_history_limit{cronjob="ActiveRunningCronJobWithTZ1",namespace="ns1"} 3
kube_cronjob_spec_suspend{cronjob="ActiveRunningCronJobWithTZ1",namespace="ns1"} 0
kube_cronjob_status_active{cronjob="ActiveRunningCronJobWithTZ1",namespace="ns1"} 2
kube_cronjob_metadata_resource_version{cronjob="ActiveRunningCronJobWithTZ1",namespace="ns1"} 11111
kube_cronjob_status_last_schedule_time{cronjob="ActiveRunningCronJobWithTZ1",namespace="ns1"} 1.520742896e+09
` + fmt.Sprintf("kube_cronjob_next_schedule_time{cronjob=\"ActiveRunningCronJobWithTZ1\",namespace=\"ns1\"} %ve+09\n",
float64(ActiveRunningCronJobWithTZ1NextScheduleTime.Unix())/math.Pow10(9)),
MetricNames: []string{
"kube_cronjob_next_schedule_time",
"kube_cronjob_spec_starting_deadline_seconds",
"kube_cronjob_status_active",
"kube_cronjob_metadata_resource_version",
"kube_cronjob_spec_suspend",
"kube_cronjob_info",
"kube_cronjob_created",
"kube_cronjob_annotations",
"kube_cronjob_labels",
"kube_cronjob_status_last_schedule_time",
"kube_cronjob_spec_successful_job_history_limit",
"kube_cronjob_spec_failed_job_history_limit",
},
},
{
AllowAnnotationsList: []string{
"app.k8s.io/owner",
@@ -159,7 +254,7 @@ func TestCronJobStore(t *testing.T) {
# TYPE kube_cronjob_status_active gauge
# TYPE kube_cronjob_metadata_resource_version gauge
# TYPE kube_cronjob_status_last_schedule_time gauge
kube_cronjob_info{concurrency_policy="Forbid",cronjob="ActiveRunningCronJob1",namespace="ns1",schedule="0 */6 * * *"} 1
kube_cronjob_info{concurrency_policy="Forbid",cronjob="ActiveRunningCronJob1",namespace="ns1",schedule="0 */6 * * *",timezone="local"} 1
kube_cronjob_annotations{annotation_app_k8s_io_owner="@foo",cronjob="ActiveRunningCronJob1",namespace="ns1"} 1
kube_cronjob_spec_failed_job_history_limit{cronjob="ActiveRunningCronJob1",namespace="ns1"} 1
kube_cronjob_spec_starting_deadline_seconds{cronjob="ActiveRunningCronJob1",namespace="ns1"} 300
@@ -206,6 +301,7 @@ func TestCronJobStore(t *testing.T) {
ConcurrencyPolicy: "Forbid",
Suspend: &SuspendTrue,
Schedule: "0 */3 * * *",
TimeZone: &TimeZone,
SuccessfulJobsHistoryLimit: &SuccessfulJobHistoryLimit3,
FailedJobsHistoryLimit: &FailedJobHistoryLimit1,
},
@@ -233,7 +329,7 @@ func TestCronJobStore(t *testing.T) {
# TYPE kube_cronjob_metadata_resource_version gauge
# TYPE kube_cronjob_status_last_schedule_time gauge
# TYPE kube_cronjob_status_last_successful_time gauge
kube_cronjob_info{concurrency_policy="Forbid",cronjob="SuspendedCronJob1",namespace="ns1",schedule="0 */3 * * *"} 1
kube_cronjob_info{concurrency_policy="Forbid",cronjob="SuspendedCronJob1",namespace="ns1",schedule="0 */3 * * *",timezone="Asia/Shanghai"} 1
kube_cronjob_spec_failed_job_history_limit{cronjob="SuspendedCronJob1",namespace="ns1"} 1
kube_cronjob_spec_starting_deadline_seconds{cronjob="SuspendedCronJob1",namespace="ns1"} 300
kube_cronjob_spec_successful_job_history_limit{cronjob="SuspendedCronJob1",namespace="ns1"} 3
@@ -292,7 +388,7 @@ func TestCronJobStore(t *testing.T) {
# TYPE kube_cronjob_metadata_resource_version gauge
# TYPE kube_cronjob_status_last_schedule_time gauge
# TYPE kube_cronjob_status_last_successful_time gauge
kube_cronjob_info{concurrency_policy="Forbid",cronjob="SuspendedCronJob1",namespace="ns1",schedule="0 */3 * * *"} 1
kube_cronjob_info{concurrency_policy="Forbid",cronjob="SuspendedCronJob1",namespace="ns1",schedule="0 */3 * * *",timezone="local"} 1
kube_cronjob_spec_failed_job_history_limit{cronjob="SuspendedCronJob1",namespace="ns1"} 1
kube_cronjob_spec_starting_deadline_seconds{cronjob="SuspendedCronJob1",namespace="ns1"} 300
kube_cronjob_spec_successful_job_history_limit{cronjob="SuspendedCronJob1",namespace="ns1"} 3
@@ -351,15 +447,15 @@ func TestCronJobStore(t *testing.T) {
# TYPE kube_cronjob_spec_successful_job_history_limit gauge
# TYPE kube_cronjob_spec_suspend gauge
# TYPE kube_cronjob_status_active gauge
# TYPE kube_cronjob_metadata_resource_version gauge
# TYPE kube_cronjob_status_last_successful_time gauge
kube_cronjob_spec_starting_deadline_seconds{cronjob="ActiveCronJob1NoLastScheduled",namespace="ns1"} 300
kube_cronjob_status_active{cronjob="ActiveCronJob1NoLastScheduled",namespace="ns1"} 0
kube_cronjob_metadata_resource_version{cronjob="ActiveCronJob1NoLastScheduled",namespace="ns1"} 33333
kube_cronjob_spec_failed_job_history_limit{cronjob="ActiveCronJob1NoLastScheduled",namespace="ns1"} 1
kube_cronjob_spec_successful_job_history_limit{cronjob="ActiveCronJob1NoLastScheduled",namespace="ns1"} 3
kube_cronjob_spec_suspend{cronjob="ActiveCronJob1NoLastScheduled",namespace="ns1"} 0
kube_cronjob_info{concurrency_policy="Forbid",cronjob="ActiveCronJob1NoLastScheduled",namespace="ns1",schedule="25 * * * *"} 1
kube_cronjob_info{concurrency_policy="Forbid",cronjob="ActiveCronJob1NoLastScheduled",namespace="ns1",schedule="25 * * * *",timezone="local"} 1
kube_cronjob_created{cronjob="ActiveCronJob1NoLastScheduled",namespace="ns1"} 1.520766296e+09
` +
fmt.Sprintf("kube_cronjob_next_schedule_time{cronjob=\"ActiveCronJob1NoLastScheduled\",namespace=\"ns1\"} %ve+09\n",
@@ -375,3 +471,37 @@ func TestCronJobStore(t *testing.T) {
}
}
}

func TestGetNextScheduledTime(t *testing.T) {

testCases := []struct {
schedule string
lastScheduleTime metav1.Time
createdTime metav1.Time
timeZone string
expected time.Time
}{
{
schedule: "0 */6 * * *",
lastScheduleTime: metav1.Time{Time: ActiveRunningCronJob1LastScheduleTime},
createdTime: metav1.Time{Time: ActiveRunningCronJob1LastScheduleTime},
timeZone: "UTC",
expected: ActiveRunningCronJob1LastScheduleTime.Add(time.Second*4 + time.Minute*25 + time.Hour),
},
{
schedule: "0 */6 * * *",
lastScheduleTime: metav1.Time{Time: ActiveRunningCronJob1LastScheduleTime},
createdTime: metav1.Time{Time: ActiveRunningCronJob1LastScheduleTime},
timeZone: TimeZone,
expected: ActiveRunningCronJob1LastScheduleTime.Add(time.Second*4 + time.Minute*25 + time.Hour*5),
},
}

for _, test := range testCases {
actual, _ := getNextScheduledTime(test.schedule, &test.lastScheduleTime, test.createdTime, &test.timeZone) // #nosec G601
if !actual.Equal(test.expected) {
t.Fatalf("%v: expected %v, actual %v", test.schedule, test.expected, actual)
}
}

}