
WIP: Merge adjacent downtime segments #6579

Closed
wants to merge 7 commits

Conversation

@efuss (Contributor) commented Aug 29, 2018

Put an already running downtime in effect immediately:
If Icinga2 was restarted with a newly configured downtime that should
be in effect at the time of restart, the should-be-running segment of
it was not put into effect.

Merge adjacent downtime segments:
As legacy time periods can't span midnight, a configured downtime
spanning midnight is technically two (immediately adjacent) segments.
As segments were queued individually, at midnight, the downtime
technically ended (sending a DowntimeEnd) only to start immediately
again (sending a DowntimeStart notification).
With this fix, an immediately following segment is merged into the
current one in case the current one is ending soon (where "soon" is
defined as "12 hours or less"). The time limit is arbitrary, but
necessary to prevent endless merging in case of a 7*24 downtime.

Note that the diff of scheduleddowntime.cpp looks weird because it
inserts a new FindRunningSegment() function in front of FindNextSegment()
and the initial lines of both functions look sufficiently similar to make
diff believe FindNextSegment() got changed.
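
For illustration, here is a minimal self-contained sketch of the merge rule described above. It is not the actual patch; all names (TryMergeSegment, MergeHorizon, Segment) are placeholders invented for this example:

// Illustrative sketch only -- not the actual patch. Models the rule above:
// extend the current downtime when the next segment starts exactly where the
// current one ends and the current one ends within 12 hours.
#include <cstdio>
#include <utility>

typedef std::pair<double, double> Segment;        // (begin, end) as UNIX timestamps

static const double MergeHorizon = 12 * 60 * 60;  // "soon" = 12 hours or less

// Returns true if 'next' was merged, i.e. currentEnd was extended.
static bool TryMergeSegment(double now, double& currentEnd, const Segment& next)
{
	if (currentEnd - now > MergeHorizon)
		return false;                     // not ending soon: avoid endless merging (7*24 case)

	if (next.first != currentEnd)
		return false;                     // not immediately adjacent

	currentEnd = next.second;                 // merge: no DowntimeEnd/DowntimeStart at the boundary
	return true;
}

int main()
{
	double now = 1000.0, currentEnd = 2000.0;
	Segment next(2000.0, 3000.0);             // starts exactly when the current segment ends
	bool merged = TryMergeSegment(now, currentEnd, next);
	std::printf("merged: %d, new end: %.0f\n", merged ? 1 : 0, currentEnd);
}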

Put running downtimes in effect and merge segments

@mcktr (Member) commented Sep 17, 2018

Test

Create a new ScheduledDowntime which is already running. Ensure that the downtime is in effect immediately.

object Host "test-host-17" {
	address = "127.0.0.1"

	check_command = "icmp"

	check_interval = 1s
	retry_interval = 1s
}

object ScheduledDowntime "test-time-102" {
	host_name = "test-host-17"

	author = "icingaadmin"
	comment = "Some comment"

	ranges = {
		"2018-09-17" = "21:12-23:00"
	}
}

To verify that the downtime is immediately in effect, I queried the API.

API: https://127.0.0.1:5665/v1/objects/downtimes

The Downtime is created.

{
  "results": [
    {
      "attrs": {
        "__name": "test-host-17!de62b38e-4701-4912-b7e8-4b2ee3ca3684",
        "active": true,
        "author": "icingaadmin",
        "comment": "Some comment",
        "config_owner": "test-host-17!test-time-102",
        "duration": 0.0,
        "end_time": 1537218000.0,
        "entry_time": 1537211501.8049991131,
        "fixed": true,
        "ha_mode": 0.0,
        "host_name": "test-host-17",
        "legacy_id": 1.0,
        "name": "de62b38e-4701-4912-b7e8-4b2ee3ca3684",
        "original_attributes": null,
        "package": "_api",
        "paused": false,
        "scheduled_by": "test-host-17!test-time-102",
        "service_name": "",
        "source_location": {
          "first_column": 0.0,
          "first_line": 1.0,
          "last_column": 69.0,
          "last_line": 1.0,
          "path": "/usr/local/icinga2/var/lib/icinga2/api/packages/_api/af31cc1a-a1fe-4cd2-9295-414903dc953f/conf.d/downtimes/test-host-17!de62b38e-4701-4912-b7e8-4b2ee3ca3684.conf"
        },
        "start_time": 1537211520.0,
        "templates": [
          "de62b38e-4701-4912-b7e8-4b2ee3ca3684"
        ],
        "trigger_time": 1537211521.8073070049,
        "triggered_by": "",
        "triggers": [
          
        ],
        "type": "Downtime",
        "version": 1537211501.805038929,
        "was_cancelled": false,
        "zone": ""
      },
      "joins": {
        
      },
      "meta": {
        
      },
      "name": "test-host-17!de62b38e-4701-4912-b7e8-4b2ee3ca3684",
      "type": "Downtime"
    }
  ]
}

API: https://127.0.0.1:5665/v1/objects/hosts

Notice the downtime_depth attribute: the host has one downtime in effect.

{
  "results": [
    {
      "attrs": {
        "__name": "test-host-17",
        "acknowledgement": 0.0,
        "acknowledgement_expiry": 0.0,
        "action_url": "",
        "active": true,
        "address": "127.0.0.1",
        "address6": "",
        "check_attempt": 1.0,
        "check_command": "icmp",
        "check_interval": 1.0,
        "check_period": "",
        "check_timeout": null,
        "command_endpoint": "",
        "display_name": "test-host-17",
        "downtime_depth": 1.0,
        "enable_active_checks": true,
        "enable_event_handler": true,
        "enable_flapping": false,
        "enable_notifications": true,
        "enable_passive_checks": true,
        "enable_perfdata": true,
        "event_command": "",
        "flapping": false,
        "flapping_current": 0.0,
        "flapping_last_change": 0.0,
        "flapping_threshold": 0.0,
        "flapping_threshold_high": 30.0,
        "flapping_threshold_low": 25.0,
        "force_next_check": false,
        "force_next_notification": false,
        "groups": [
          
        ],
        "ha_mode": 0.0,
        "icon_image": "",
        "icon_image_alt": "",
        "last_check": 1537211631.3775560856,
        "last_check_result": {
          "active": true,
          "check_source": "metis",
          "command": [
            "/usr/lib/nagios/plugins/check_icmp",
            "-c",
            "200,15%",
            "-w",
            "100,5%",
            "-H",
            "127.0.0.1"
          ],
          "execution_end": 1537211631.3773798943,
          "execution_start": 1537211631.3726069927,
          "exit_status": 0.0,
          "output": "OK - 127.0.0.1: rta 0.025ms, lost 0%",
          "performance_data": [
            "rta=0.025ms;100.000;200.000;0;",
            "pl=0%;5;15;;",
            "rtmax=0.075ms;;;;",
            "rtmin=0.011ms;;;;"
          ],
          "schedule_end": 1537211631.3775560856,
          "schedule_start": 1537211631.3709609509,
          "state": 0.0,
          "ttl": 0.0,
          "type": "CheckResult",
          "vars_after": {
            "attempt": 1.0,
            "reachable": true,
            "state": 0.0,
            "state_type": 1.0
          },
          "vars_before": {
            "attempt": 1.0,
            "reachable": true,
            "state": 0.0,
            "state_type": 1.0
          }
        },
        "last_hard_state": 0.0,
        "last_hard_state_change": 1537211502.6487550735,
        "last_reachable": true,
        "last_state": 0.0,
        "last_state_change": 1537211502.6487550735,
        "last_state_down": 0.0,
        "last_state_type": 1.0,
        "last_state_unreachable": 0.0,
        "last_state_up": 1537211631.3776059151,
        "max_check_attempts": 3.0,
        "name": "test-host-17",
        "next_check": 1537211632.3776650429,
        "notes": "",
        "notes_url": "",
        "original_attributes": null,
        "package": "_etc",
        "paused": false,
        "retry_interval": 1.0,
        "severity": 1.0,
        "source_location": {
          "first_column": 0.0,
          "first_line": 1.0,
          "last_column": 25.0,
          "last_line": 1.0,
          "path": "/usr/local/icinga2/etc/icinga2/devel/test.conf"
        },
        "state": 0.0,
        "state_type": 1.0,
        "templates": [
          "test-host-17"
        ],
        "type": "Host",
        "vars": null,
        "version": 0.0,
        "volatile": false,
        "zone": ""
      },
      "joins": {
        
      },
      "meta": {
        
      },
      "name": "test-host-17",
      "type": "Host"
    }
  ]
}

Problem

After a while, multiple downtimes are created.

API: https://127.0.0.1:5665/v1/objects/downtimes

There are multiple entries.

{
  "results": [
    {
      "attrs": {
        "__name": "test-host-17!de62b38e-4701-4912-b7e8-4b2ee3ca3684",
        "active": true,
        "author": "icingaadmin",
        "comment": "Some comment",
        "config_owner": "test-host-17!test-time-102",
        "duration": 0.0,
        "end_time": 1537218000.0,
        "entry_time": 1537211501.8049991131,
        "fixed": true,
        "ha_mode": 0.0,
        "host_name": "test-host-17",
        "legacy_id": 1.0,
        "name": "de62b38e-4701-4912-b7e8-4b2ee3ca3684",
        "original_attributes": null,
        "package": "_api",
        "paused": false,
        "scheduled_by": "test-host-17!test-time-102",
        "service_name": "",
        "source_location": {
          "first_column": 0.0,
          "first_line": 1.0,
          "last_column": 69.0,
          "last_line": 1.0,
          "path": "/usr/local/icinga2/var/lib/icinga2/api/packages/_api/af31cc1a-a1fe-4cd2-9295-414903dc953f/conf.d/downtimes/test-host-17!de62b38e-4701-4912-b7e8-4b2ee3ca3684.conf"
        },
        "start_time": 1537211520.0,
        "templates": [
          "de62b38e-4701-4912-b7e8-4b2ee3ca3684"
        ],
        "trigger_time": 1537211521.8073070049,
        "triggered_by": "",
        "triggers": [
          
        ],
        "type": "Downtime",
        "version": 1537211501.805038929,
        "was_cancelled": false,
        "zone": ""
      },
      "joins": {
        
      },
      "meta": {
        
      },
      "name": "test-host-17!de62b38e-4701-4912-b7e8-4b2ee3ca3684",
      "type": "Downtime"
    },
    {
      "attrs": {
        "__name": "test-host-17!fa08e2ad-8d9d-4de6-977f-b55d12f75b93",
        "active": true,
        "author": "icingaadmin",
        "comment": "Some comment",
        "config_owner": "test-host-17!test-time-102",
        "duration": 0.0,
        "end_time": 1537218000.0,
        "entry_time": 1537211561.805934906,
        "fixed": true,
        "ha_mode": 0.0,
        "host_name": "test-host-17",
        "legacy_id": 2.0,
        "name": "fa08e2ad-8d9d-4de6-977f-b55d12f75b93",
        "original_attributes": null,
        "package": "_api",
        "paused": false,
        "scheduled_by": "test-host-17!test-time-102",
        "service_name": "",
        "source_location": {
          "first_column": 0.0,
          "first_line": 1.0,
          "last_column": 69.0,
          "last_line": 1.0,
          "path": "/usr/local/icinga2/var/lib/icinga2/api/packages/_api/af31cc1a-a1fe-4cd2-9295-414903dc953f/conf.d/downtimes/test-host-17!fa08e2ad-8d9d-4de6-977f-b55d12f75b93.conf"
        },
        "start_time": 1537211520.0,
        "templates": [
          "fa08e2ad-8d9d-4de6-977f-b55d12f75b93"
        ],
        "trigger_time": 1537211561.8158910275,
        "triggered_by": "",
        "triggers": [
          
        ],
        "type": "Downtime",
        "version": 1537211561.8060190678,
        "was_cancelled": false,
        "zone": ""
      },
      "joins": {
        
      },
      "meta": {
        
      },
      "name": "test-host-17!fa08e2ad-8d9d-4de6-977f-b55d12f75b93",
      "type": "Downtime"
    },
    {
      "attrs": {
        "__name": "test-host-17!3176080d-a1f0-4772-8ce1-968cc3e37fba",
        "active": true,
        "author": "icingaadmin",
        "comment": "Some comment",
        "config_owner": "test-host-17!test-time-102",
        "duration": 0.0,
        "end_time": 1537218000.0,
        "entry_time": 1537211621.8196120262,
        "fixed": true,
        "ha_mode": 0.0,
        "host_name": "test-host-17",
        "legacy_id": 3.0,
        "name": "3176080d-a1f0-4772-8ce1-968cc3e37fba",
        "original_attributes": null,
        "package": "_api",
        "paused": false,
        "scheduled_by": "test-host-17!test-time-102",
        "service_name": "",
        "source_location": {
          "first_column": 0.0,
          "first_line": 1.0,
          "last_column": 69.0,
          "last_line": 1.0,
          "path": "/usr/local/icinga2/var/lib/icinga2/api/packages/_api/af31cc1a-a1fe-4cd2-9295-414903dc953f/conf.d/downtimes/test-host-17!3176080d-a1f0-4772-8ce1-968cc3e37fba.conf"
        },
        "start_time": 1537211520.0,
        "templates": [
          "3176080d-a1f0-4772-8ce1-968cc3e37fba"
        ],
        "trigger_time": 1537211626.8038098812,
        "triggered_by": "",
        "triggers": [
          
        ],
        "type": "Downtime",
        "version": 1537211621.8197479248,
        "was_cancelled": false,
        "zone": ""
      },
      "joins": {
        
      },
      "meta": {
        
      },
      "name": "test-host-17!3176080d-a1f0-4772-8ce1-968cc3e37fba",
      "type": "Downtime"
    },
[...]
  ]
}

(output truncated)

Querying the host via the API, I can confirm that there are multiple downtimes in effect.

API: https://127.0.0.1:5665/v1/objects/hosts

Notice the downtime_depth attribute: there are 12 downtimes active.

{
  "results": [
    {
      "attrs": {
        "__name": "test-host-17",
        "acknowledgement": 0.0,
        "acknowledgement_expiry": 0.0,
        "action_url": "",
        "active": true,
        "address": "127.0.0.1",
        "address6": "",
        "check_attempt": 1.0,
        "check_command": "icmp",
        "check_interval": 1.0,
        "check_period": "",
        "check_timeout": null,
        "command_endpoint": "",
        "display_name": "test-host-17",
        "downtime_depth": 12.0,
        "enable_active_checks": true,
        "enable_event_handler": true,
        "enable_flapping": false,
[...]
    }
  ]
}

(output truncated)

Looking into the debug log, I can see multiple log messages like the following. All log messages are for the same ScheduledDowntime object.

[2018-09-17 21:34:42 +0200] debug/ScheduledDowntime: Creating new Downtime for ScheduledDowntime "test-host-17!test-time-102"

Short summary: the downtime is re-created every minute, so we end up with multiple downtime objects after a while.

@mcktr (Member) left a comment

See my test.

In ScheduledDowntime::FindRunningSegment(), only regard downtimes that last longer than minEnd, not at least as long.
Otherwise, a running downtime with a fixed start date will be queued over and over again.
Revert commit 406e5f2 as the patched file was from the wrong branch
In ScheduledDowntime::FindRunningSegment(), only regard downtimes that last longer than minEnd, not at least as long.
Otherwise, a running downtime with a fixed start date will be queued over and over again.
Revert commit 2e721ba since it put scheduleddowntime.cpp in the wrong place.
In ScheduledDowntime::FindRunningSegment(), only regard downtimes that last longer than minEnd, not at least as long.
Otherwise, a running downtime with a fixed start date will be queued over and over again.
@efuss (Contributor, Author) commented Sep 18, 2018

Thanks for testing and reporting the failure.

There was a < comparison in ScheduledDowntime::FindRunningSegment() that should have been a <=.
The problem didn't show up for me because there was always an adjacent segment to merge; I'm unsure why it didn't show up before I implemented the merging.
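
In other words, the boundary condition amounts to something like the following minimal illustration (assumed names, not the actual code): a segment ending exactly at minEnd must be skipped, otherwise it gets picked up and queued again on every check.

// Minimal illustration of the boundary fix (assumed names, not Icinga 2 code):
// only regard candidates that end strictly after minEnd, otherwise a running
// downtime with a fixed start date gets queued over and over again.
#include <cstdio>

static bool IsRunningCandidate(double segmentBegin, double segmentEnd,
                               double now, double minEnd)
{
	if (segmentEnd <= minEnd)       // was effectively '<': segments ending exactly
		return false;           // at minEnd slipped through and were re-queued
	return segmentBegin <= now && now < segmentEnd;
}

int main()
{
	// A segment ending exactly at minEnd is no longer considered.
	std::printf("%d\n", IsRunningCandidate(100.0, 200.0, 150.0, 200.0) ? 1 : 0); // prints 0
}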

I'm also unsure as to whether GitHub automagically updated my Pull Request.
It took me less than 5 minutes to understand the problem from your report, 5 to 10 minutes to find the flaw in my change, seconds to correct it and about an hour fighting with GitHub to hopefully get the Pull Request right.

@dnsmichi (Contributor) commented:

@mcktr Can you have a look again, please? I'd like to include this in 2.11 then.

@dnsmichi added the "bug" and "area/notifications" labels on Oct 10, 2018
@mcktr (Member) commented Oct 10, 2018

Yep, I will have a look in the next few days.

@mcktr (Member) commented Oct 15, 2018

Thanks for updating the PR.

Tests

Put running downtime in effect immediately

object Host "devel-host-001" {
	import "devel-host-template"

	address = "127.0.0.1"
}

object ScheduledDowntime "devel-downtime-001" {
	host_name = "devel-host-001"

	author = "icingaadmin"
	comment = "Some comment"

	ranges = {
		"2018-10-15" = "10:00-12:00"
	}
}

The downtime is in effect immediately after restart. After waiting for 10 minutes, no other downtimes are created -> Good. This part of the patch works.

Merge adjacent downtime segments

object Host "devel-host-001" {
	import "devel-host-template"

	address = "127.0.0.1"
}

object ScheduledDowntime "devel-downtime-001" {
	host_name = "devel-host-001"

	author = "icingaadmin"
	comment = "Some comment"

	ranges = {
		"2018-10-15" = "10:00-11:30"
	}
}

object ScheduledDowntime "devel-downtime-002" {
	host_name = "devel-host-001"

	author = "icingaadmin"
	comment = "Some comment"

	ranges = {
		"2018-10-15" = "11:30-12:00"
	}
}

Watching the log to verify the merging:

[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: Try merge
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: By us, ends soon (Mon Oct 15 11:30:00 2018)
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: Finding next scheduled downtime segment for time 1539595447 (minBegin Mon Oct 15 11:30:00 2018)
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: Evaluating segment: 2018-10-15: 10:00-11:30
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: Next Segment doesn't fit: Thu Jan  1 01:00:00 1970 != Mon Oct 15 11:30:00 2018
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: Not by us (devel-host-001!devel-downtime-002 != devel-host-001!devel-downtime-001)
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: No merge
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: Creating new Downtime for ScheduledDowntime "devel-host-001!devel-downtime-001"
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: Finding running scheduled downtime segment for time 1539595447 (minEnd Mon Oct 15 12:00:00 2018)
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: Evaluating (running?) segment: 2018-10-15: 10:00-11:30
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: Considering (running?) segment: Mon Oct 15 10:00:00 2018 -> Mon Oct 15 11:30:00 2018
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: ending too early.
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: Finding next scheduled downtime segment for time 1539595447 (minBegin -)
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: Evaluating segment: 2018-10-15: 10:00-11:30
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: Try merge
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: Not by us (devel-host-001!devel-downtime-001 != devel-host-001!devel-downtime-002)
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: By us, ends soon (Mon Oct 15 12:00:00 2018)
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: Finding next scheduled downtime segment for time 1539595447 (minBegin Mon Oct 15 12:00:00 2018)
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: Evaluating segment: 2018-10-15: 11:30-12:00
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: Considering segment: Mon Oct 15 11:30:00 2018 -> Mon Oct 15 12:00:00 2018
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: beginning to early.
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: Next Segment doesn't fit: Thu Jan  1 01:00:00 1970 != Mon Oct 15 12:00:00 2018
[2018-10-15 11:24:07 +0200] debug/ScheduledDowntime: No merge

The first merge attempt is against the downtime itself; I guess we can safely skip this attempt.

The second merge attempt is against the adjacent downtime, which starts right after the initial one. But due to a date mismatch (Thu Jan 1 01:00:00 1970 != Mon Oct 15 12:00:00 2018) it doesn't fit, and no merge happens. I assume that some default date value is used, since the date is equal to Unix time 0.
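
For reference, a sketch of that guess (assumed names, not the actual code): when no further segment of this ScheduledDowntime exists, the returned segment start is 0, i.e. the epoch, so the adjacency check fails and no merge happens.

// Sketch only (assumed names, not Icinga 2 code): why the log shows
// "Thu Jan  1 01:00:00 1970". If no further segment is found, the lookup
// effectively yields a zeroed segment, so the adjacency check compares the
// epoch against the expected begin time and the merge is rejected.
#include <cstdio>
#include <utility>

typedef std::pair<double, double> Segment;   // (begin, end) as UNIX timestamps

static Segment FindNextSegmentStub(bool haveAnotherSegment)
{
	if (haveAnotherSegment)
		return Segment(1539597600.0, 1539599400.0);  // some later segment
	return Segment(0.0, 0.0);                            // "no segment": the epoch
}

int main()
{
	double expectedBegin = 1539597600.0;                 // current segment's end (Mon Oct 15 12:00)
	Segment next = FindNextSegmentStub(false);
	if (next.first != expectedBegin)
		std::printf("Next segment doesn't fit: %.0f != %.0f -> no merge\n",
		            next.first, expectedBegin);
}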

Please have a look into the second part of your patch.

@efuss (Contributor, Author) commented Oct 15, 2018 via email

@mcktr (Member) commented Oct 15, 2018

Thanks for the clarification. 👍

So the above example is correctly not merged, since these are two different downtimes.

More Tests

object ScheduledDowntime "devel-downtime-001" {
	host_name = "devel-host-001"

	author = "icingaadmin"
	comment = "Some comment"

	ranges = {
		"2018-10-15" = "17:55-18:00,18:00-18:05"
	}
}

If I understand you correctly, the second segment should be merged into the first, since this is the same downtime with two adjacent segments, correct?

It does.

[2018-10-15 17:55:33 +0200] debug/ScheduledDowntime: Try merge
[2018-10-15 17:55:33 +0200] debug/ScheduledDowntime: By us, ends soon (Mon Oct 15 18:00:00 2018)
[2018-10-15 17:55:33 +0200] debug/ScheduledDowntime: Finding next scheduled downtime segment for time 1539618933 (minBegin Mon Oct 15 18:00:00 2018)
[2018-10-15 17:55:33 +0200] debug/ScheduledDowntime: Evaluating segment: 2018-10-15: 17:55-18:00,18:00-18:05
[2018-10-15 17:55:33 +0200] debug/ScheduledDowntime: Considering segment: Mon Oct 15 18:00:00 2018 -> Mon Oct 15 18:05:00 2018
[2018-10-15 17:55:33 +0200] debug/ScheduledDowntime: (best match yet)
[2018-10-15 17:55:33 +0200] debug/ScheduledDowntime: Next Segment fits, extending end time Mon Oct 15 18:00:00 2018 to Mon Oct 15 18:05:00 2018

One thing I noticed in Icinga Web 2 during the tests:

[Screenshot: downtimes - chromium_002]

Notice the negative "expires in" value. After all segments of the downtime have expired and the downtime is finally over, it won't get cleared in Icinga Web 2. When the transition from the first segment to the second segment happens, a second downtime shows up, but that one will be cleared on expiration. The downtime with the negative "expires in" value stays there until I restart the Icinga 2 daemon.

A guess at what happens: the downtime is initially created with the original end time of the first segment; after a while it is merged with the adjacent segment (you can verify this via the API). But the updated end time does not find its way into the database which Icinga Web 2 uses. So in Icinga Web 2 we see the old end time of the first segment for the downtime, which results in a negative "expires in" value.

I'm looking forward to your feedback.

@mcktr added the "needs feedback" label on Oct 15, 2018
@efuss (Contributor, Author) commented Oct 16, 2018 via email

@mcktr (Member) commented Oct 17, 2018

Are you simply not meant to extend a downtime? Is Icinga Web 2 at fault for not re-checking it? Is it simply me needing to call notify_api_of_downtime_change() or something?

There is currently no way to extend an existing downtime, except removing and re-creating it with the new ending time. For configuration objects such as hosts, services, users, downtimes, etc., Icinga Web 2 relies on the IDO database. The downtime object which you can fetch via the API is already current, so updating the API wouldn't help here.

I took pen and paper and drew some pictures to illustrate the problem a bit more.

We have the following ScheduledDowntime object.

object ScheduledDowntime "devel-downtime-001" {
	host_name = "devel-host-001"

	author = "icingaadmin"
	comment = "Some comment"

	ranges = {
		"2018-10-17" = "14:30-14:35,14:35-14:40"
	}
}

The expected behavior is a downtime start notification at 14:30 and a downtime end notification at 14:40.

The current behavior looks like the following:

[Diagram: 1-downtime-created]

The downtime for the first segment (14:30-14:35) is created. The downtime object is written to the IDO database and exposed via API. A downtime start notification will be sent at 14:30.

[Diagram: 2-downtime-extened]

The initially created downtime is merged with the adjacent segment (14:35-14:40). This is done by setting the downtime ending time to the ending time of the adjacent segment (14:40).

if (segment.first == current_end) {
	Log(LogDebug, "ScheduledDowntime")
		<< "Next Segment fits, extending end time "
		<< Utility::FormatDateTime("%c", current_end)
		<< " to " << Utility::FormatDateTime("%c", segment.second);
	downtime->SetEndTime(segment.second, false);
	return;
}
This works for the API but the IDO database never sees the updated ending time.

[Diagram: 3-downtime-created]

Another downtime is created for the second segment (14:35-14:40). The downtime object is written to the IDO database and exposed via API. A downtime start notification will be sent at 14:35.

[Diagram: result]

In the end we have two downtimes, and four notifications are sent out.

Summary:

Extending the downtime ending time by setting it to the end time of the second segment is not sufficient.

  • the downtime object in the IDO database must be updated when merging the second segment

Looking into downtime.cpp, we have Downtime::AddDowntime and Downtime::RemoveDowntime implemented, but unfortunately no Downtime::UpdateDowntime (or similar).

  • the downtime object for the second segment must not be created

If an adjacent segment is merged, there shouldn't be another downtime created for the second segment; a small toy sketch of both requirements follows below.
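
To make the two requirements concrete, here is a small self-contained toy model (plain C++, not Icinga 2 code; the map stands in for the IDO database and all names are made up for illustration):

// Toy model of the two requirements: when a segment is merged, the backend
// copy must be refreshed, and the merged segment must not spawn a second
// downtime object.
#include <cstdio>
#include <map>
#include <string>

struct DowntimeRecord { double begin; double end; };

// Stand-in for the IDO database, keyed by downtime name.
static std::map<std::string, DowntimeRecord> g_Backend;

static void AddDowntime(const std::string& name, const DowntimeRecord& d)
{
	g_Backend[name] = d;                 // backend sees the new object
}

static void ExtendDowntime(const std::string& name, double newEnd, DowntimeRecord& inMemory)
{
	inMemory.end = newEnd;               // in-memory/API side (what SetEndTime() does)
	g_Backend[name].end = newEnd;        // the missing step: refresh the backend copy
}

int main()
{
	DowntimeRecord d = { 1000.0, 2000.0 };               // first segment
	AddDowntime("devel-downtime-001", d);
	ExtendDowntime("devel-downtime-001", 3000.0, d);     // merge the adjacent segment
	// No second AddDowntime() for the merged segment: only one object exists.
	std::printf("backend end time: %.0f\n", g_Backend["devel-downtime-001"].end);
}

Only the in-memory update happens today; refreshing the backend copy and skipping the second AddDowntime() call are the missing pieces.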

Since the first part of your patch (put an already running downtime in effect immediately) works flawlessly, could you split it out into a separate PR? This way we can focus here on getting the second part working. :)

@efuss (Contributor, Author) commented Oct 17, 2018 via email

Suppress a misleading debug message stating the next segment won't fit because its start time (the epoch) didn't match. Instead, log that no next segment exists.
@efuss (Contributor, Author) commented Oct 18, 2018 via email

@mcktr changed the title from "Put running downtimes in effect and merge segments" to "WIP: Merge adjacent downtime segments" on Oct 19, 2018
@efuss (Contributor, Author) commented Oct 19, 2018 via email

@efuss (Contributor, Author) commented Oct 23, 2018 via email

@dnsmichi requested a review from mcktr on June 7, 2019 08:48
@dnsmichi (Contributor) commented:

@efuss Can you please move this into a new issue for better discussion with involved developers and users? The PR unfortunately went stale in this regard. Thanks.

@dnsmichi closed this on Nov 14, 2019
Labels: area/notifications, bug, needs feedback