
Added origins of pipeline configurations #11123

Merged

Conversation

andsel
Contributor

@andsel andsel commented Sep 6, 2019

Added origins of pipeline configurations: for example, the config string when the configuration is passed with -e on the command line, or the paths of the config files used.

Member

@robbavey robbavey left a comment


Nice work @andsel - just missing a few config sources, and I have a question on whether you think we might be outputting this information too regularly (particularly for customers who are using complex configurations with many config files).

"config string"
when "file"
config_part.id
end
Member


There are some sources missing here, which leads to this value being set to nil: monitoring pipelines, central pipeline management, and modules.


pipeline_sources = []
pipeline_config.config_parts.each do |config_part|
pipeline_sources <<
Member


This is missing some source types - managed pipelines, monitoring pipelines and modules
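A complete mapping covering the source types mentioned in the review might look something like the following sketch. `ConfigPart`, the `protocol` accessor, and the protocol strings are illustrative stand-ins based on this thread, not the actual Logstash classes:

```ruby
# Sketch: derive a human-readable origin for each config part, covering
# the source types raised in the review. ConfigPart and the protocol
# strings here are assumptions for illustration only.
ConfigPart = Struct.new(:protocol, :id)

def source_name(config_part)
  case config_part.protocol
  when "string"
    "config string"
  when "file"
    config_part.id  # the path of the config file
  when "x-pack-config-management"
    "central pipeline management"
  when "monitoring"
    "monitoring pipeline"
  when "module"
    "module"
  else
    "unknown source"
  end
end

parts = [ConfigPart.new("string", "config_string"),
         ConfigPart.new("file", "/etc/logstash/conf.d/main.conf")]
pipeline_sources = parts.map { |part| source_name(part) }
```

With an explicit `else` branch, an unrecognized protocol yields a placeholder string rather than the nil noted in the review.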

@@ -472,6 +472,20 @@ def maybe_setup_out_plugins
def default_logging_keys(other_keys = {})
Member


I wonder if default_logging_keys is necessarily the right place for this - this is logged in 16 different places in this file. Maybe we could use it more selectively (on startup, on failure?)

@andsel
Contributor Author

andsel commented Sep 9, 2019

@robbavey I addressed your suggestions. The pipeline source is now logged only at pipeline startup, and I moved the code that extracts the pipeline source from PipelineConfig into the pipelines' root class, AbstractPipelineExt.
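The startup-only approach could be sketched as below: `default_logging_keys` stays lean, and the sources are passed explicitly only in the startup log call. `StubLogger`, `PipelineSketch`, and the key names are illustrative, not the actual Logstash classes:

```ruby
# Sketch: log pipeline sources once at startup instead of folding them
# into every default_logging_keys call. All names are illustrative.
class StubLogger
  attr_reader :messages

  def initialize
    @messages = []
  end

  def info(message, params = {})
    @messages << [message, params]
  end
end

class PipelineSketch
  def initialize(logger, sources)
    @logger = logger
    @pipeline_sources = sources
  end

  # No sources here: this helper is called from many log sites.
  def default_logging_keys(other_keys = {})
    { "pipeline.id" => "main" }.merge(other_keys)
  end

  def start
    @logger.info("Starting pipeline",
                 default_logging_keys("pipeline.sources" => @pipeline_sources))
  end
end

logger = StubLogger.new
PipelineSketch.new(logger, ["/etc/logstash/conf.d/main.conf"]).start
```

This keeps the sources out of the other ~16 log sites that go through `default_logging_keys` while still surfacing them where they matter most.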

Member

@robbavey robbavey left a comment


Getting very close! If we can get some tests in too, that would be awesome

"pipeline.workers" => pipeline_workers,
"pipeline.batch.size" => batch_size,
"pipeline.batch.delay" => batch_delay,
"pipeline.max_inflight" => max_inflight))
"pipeline.max_inflight" => max_inflight)
pipeline_log_params["pipeline.sources"] = pipeline_source_details
Member


Can you also add this to

logger.error("Pipeline aborted due to error", default_logging_keys(:exception => e, :backtrace => e.backtrace))
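What the reviewer asks for could be sketched like this, with the sources passed through the same `default_logging_keys` call as the exception details. The method and key names are illustrative, not the actual Logstash implementation:

```ruby
# Sketch: the abort log carrying the pipeline sources alongside the
# exception details. Names are illustrative stand-ins.
def default_logging_keys(other_keys = {})
  { "pipeline.id" => "main" }.merge(other_keys)
end

def abort_log_params(exception, sources)
  default_logging_keys(:exception => exception,
                       :backtrace => exception.backtrace,
                       "pipeline.sources" => sources)
end

begin
  raise "boom"
rescue => e
  params = abort_log_params(e, ["config string"])
end
```

Having the sources in the failure log means a support case can tell which configuration files produced a broken pipeline without a second round-trip.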

@@ -636,6 +639,20 @@ def maybe_setup_out_plugins
def default_logging_keys(other_keys = {})
keys = super
keys[:thread] ||= thread.inspect if thread

Member


Can be removed

"pipeline.batch.size" => settings.get("pipeline.batch.size"),
"pipeline.batch.delay" => settings.get("pipeline.batch.delay"))

pipeline_log_params["pipeline.sources"] = pipeline_source_details
Member


Can you also add this to

@logger.error("Pipeline aborted due to error", default_logging_keys(:exception => e, :backtrace => e.backtrace))

case "x-pack-config-management":
pipelineSources.add(RubyString.newString(context.runtime, "central pipeline management"));
break;
case "module":
Member


Ideally I think we might want the name of the module here, but it might not be too important as we should be able to glean it from elsewhere

@andsel andsel requested a review from robbavey September 10, 2019 14:31
Member

@robbavey robbavey left a comment


Really close to getting this through! A couple of nitpicks around unnecessary code, and an ask for an extra test and we should be good to go!

@@ -225,11 +227,14 @@ def start_workers
config_metric.gauge(:graph, ::LogStash::Config::LIRSerializer.serialize(lir))
config_metric.gauge(:cluster_uuids, resolve_cluster_uuids)

@logger.info("Starting pipeline", default_logging_keys(
pipeline_log_params = default_logging_keys(
Member


Nit: No need for the extra assignment:

Suggested change
pipeline_log_params = default_logging_keys(
pipeline_log_params = default_logging_keys(
"pipeline.workers" => pipeline_workers,
"pipeline.batch.size" => batch_size,
"pipeline.batch.delay" => batch_delay,
"pipeline.max_inflight" => max_inflight))
"pipeline.max_inflight" => max_inflight,
"pipeline.sources" => pipeline_source_details)
@logger.info("Starting pipeline", default_logging_keys)

"pipeline.workers" => settings.get("pipeline.workers"),
"pipeline.batch.size" => settings.get("pipeline.batch.size"),
"pipeline.batch.delay" => settings.get("pipeline.batch.delay")))
pipeline_log_params = default_logging_keys(
Member


Nit: Again no need for separate assignment

@@ -180,7 +182,9 @@ def start
@finished_run.make_true
rescue => e
close
@logger.error("Pipeline aborted due to error", default_logging_keys(:exception => e, :backtrace => e.backtrace))
pipeline_log_params = default_logging_keys(:exception => e, :backtrace => e.backtrace)
Member


Nit again, no need for extra assignment, add the extra field to the default_logging_keys method call

plainlog_file = "#{temp_dir}/logstash-plain.log"
expect(File.exists?(plainlog_file)).to be true
expect(IO.read(plainlog_file) =~ /Starting pipeline.*"pipeline.sources"=>\["config string"\]/).to be > 0
end
Member


Can we also add a test for logstash starting via a config file as well as config string? I think that is probably where this functionality has the most utility.
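The assertion such a config-file test might make against the plain log could be sketched as below, mirroring the existing config-string check above. The helper and the sample log line are illustrative, not taken from the actual test suite:

```ruby
# Sketch: check that the plain log mentions a given config-file path in
# "pipeline.sources". The sample line is illustrative only.
def log_mentions_source?(log_text, source)
  pattern = /Starting pipeline.*"pipeline\.sources"=>\[#{Regexp.escape(source.inspect)}\]/
  !(log_text =~ pattern).nil?
end

sample = '[logstash.javapipeline] Starting pipeline {"pipeline.sources"=>["/tmp/logstash.conf"]}'
log_mentions_source?(sample, "/tmp/logstash.conf")  # => true
```

`Regexp.escape` matters here because a file path contains dots that would otherwise act as regex wildcards.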

@andsel
Contributor Author

andsel commented Sep 10, 2019

Thanks @robbavey for the valuable suggestions; it should be fine now.

Member

@robbavey robbavey left a comment


A couple more things: I think we should be able to improve the test flakiness, and I have a nitpick on a leftover puts statement in the test; then I'm happy to see this merged.

"pipeline.id" => pipeline_name
}
IO.write(@ls.application_settings_file, settings.to_yaml)
puts "initial_config_file: #{initial_config_file}"
Member


No need for puts statement

IO.write(@ls.application_settings_file, settings.to_yaml)
puts "initial_config_file: #{initial_config_file}"
@ls.spawn_logstash("-w", "1", "-f", "#{initial_config_file}")
@ls.wait_for_logstash
Member


I think this line might be causing the build flakiness, as Logstash comes up and goes down before the call to the API endpoint can be completed. Removing this line from this (and the other logging tests) might help.

Contributor Author


@robbavey fixed this by adding code that checks for LS process termination.
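Waiting on process termination rather than on the API endpoint could be sketched as follows; `"true"` stands in for the spawned Logstash process, and the helper name is illustrative:

```ruby
require "timeout"

# Sketch: block until a spawned process exits (with a safety timeout)
# before inspecting its log output, instead of polling an API endpoint.
def wait_for_exit(pid, timeout_seconds = 60)
  Timeout.timeout(timeout_seconds) { Process.wait(pid) }
  $?.exitstatus
end

pid = Process.spawn("true")   # stand-in for the spawned LS process
status = wait_for_exit(pid, 5)
```

Because the log file is only read after the process has exited, the test no longer races against Logstash's startup and shutdown.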

Member

@robbavey robbavey left a comment


LGTM

…s of config files used, module).

closes 9630
@andsel andsel force-pushed the feature/add_pipeline_configuration_origins branch from 34b5880 to 0e622cc Compare September 12, 2019 14:04
@andsel andsel merged commit 0e622cc into elastic:master Sep 12, 2019
@jsvd
Member

jsvd commented Sep 25, 2019

@andsel @robbavey should we also get this into 7.x (7.5.0)?

@robbavey
Member

@jsvd - yes, we should.

@andsel
Contributor Author

andsel commented Sep 30, 2019

This is already present in branch 7.x (1cbaeeb). Does it also have to be ported to 7.5?
