[ML] Merge the Jindex master feature branch (#36702)
* [ML] Job and datafeed mappings with index template (#32719)

Index mappings for the configuration documents

* [ML] Job config document CRUD operations (#32738)

* [ML] Datafeed config CRUD operations (#32854)

* [ML] Change JobManager to work with Job config in index (#33064)

* [ML] Change Datafeed actions to read config from the config index (#33273)

* [ML] Allocate jobs based on JobParams rather than cluster state config (#33994)

* [ML] Return missing job error when .ml-config does not exist (#34177)

* [ML] Close job in index (#34217)

* [ML] Adjust finalize job action to work with documents (#34226)

* [ML] Job in index: Datafeed node selector (#34218)

* [ML] Job in Index: Stop and preview datafeed (#34605)

* [ML] Delete job document (#34595)

* [ML] Convert job data remover to work with index configs (#34532)

* [ML] Job in index: Get datafeed and job stats from index (#34645)

* [ML] Job in Index: Convert get calendar events to index docs (#34710)

* [ML] Job in index: delete filter action (#34642)

This changes the delete filter action to search
for jobs using the filter to be deleted in the index
rather than the cluster state.

* [ML] Job in Index: Enable integ tests (#34851)

Enables the ML integration tests, excluding the rolling upgrade tests, and includes a number of
fixes to make the tests pass again.

* [ML] Reimplement established model memory (#35500)

This is the 7.0 implementation of a master node service to
keep track of the native process memory requirement of each ML
job with an associated native process.

The new ML memory tracker service works when the whole cluster
is upgraded to at least version 6.6. For mixed version clusters
the old mechanism of established model memory stored on the job
in cluster state is used. This means that the old (and complex)
code to keep established model memory up to date on the job object
has been removed in 7.0.

Forward port of #35263

* [ML] Need to wait for shards to replicate in distributed test (#35541)

Because the cluster was expanded from 1 node to 3, indices would
initially start off with 0 replicas. If the original node was
killed before auto-expansion to 1 replica was complete, then
the test would fail because the indices would be unavailable.

* [ML] DelayedDataCheckConfig index mappings (#35646)

* [ML] JIndex: Restore finalize job action (#35939)

* [ML] Replace Version.CURRENT in streaming functions (#36118)

* [ML] Use 'anomaly-detector' in job config doc name (#36254)

* [ML] Job In Index: Migrate config from the clusterstate (#35834)

Migrate ML configuration from cluster state to the index for closed jobs
only, once all nodes are v6.6.0 or higher

* [ML] Check groups against job Ids on update (#36317)

* [ML] Adapt to periodic persistent task refresh (#36633)

* [ML] Adapt to periodic persistent task refresh

If https://github.com/elastic/elasticsearch/pull/36069/files is
merged then the approach for reallocating ML persistent tasks
after refreshing job memory requirements can be simplified.
This change begins the simplification process.

* Remove AwaitsFix and implement TODO

* [ML] Default search size for configs

* Fix TooManyJobsIT.testMultipleNodes

Two problems:

1. Stack overflow during async iteration when there are lots of
   jobs on the same machine
2. Not effectively setting search size in all cases

* Use execute() instead of submit() in MlMemoryTracker

We don't need a Future to wait for completion

* [ML][TEST] Fix NPE in JobManagerTests

* [ML] JIndex: Limit the size of bulk migrations (#36481)

* [ML] Prevent updates and upgrade tests (#36649)

* [FEATURE][ML] Add cluster setting that enables/disables config migration (#36700)

This commit adds a cluster setting called `xpack.ml.enable_config_migration`.
The setting is `true` by default. When set to `false`, no config migration will
be attempted and non-migrated resources (e.g. jobs, datafeeds) can be
updated normally.

Relates #32905
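
For reference, a boolean, dynamically updatable cluster setting of this kind is typically declared roughly as below. This is a hedged sketch, not the exact field from the PR; the class name is illustrative.

```java
import org.elasticsearch.common.settings.Setting;

// Sketch only: the real declaration lives in the ML plugin; names here are illustrative.
public class MlConfigMigrationSettings {
    // true by default; Dynamic so it can be changed via the cluster settings API at runtime
    public static final Setting<Boolean> ENABLE_CONFIG_MIGRATION =
            Setting.boolSetting("xpack.ml.enable_config_migration", true,
                    Setting.Property.Dynamic, Setting.Property.NodeScope);
}
```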

* [ML] Snapshot ml configs before migrating (#36645)

* [FEATURE][ML] Split in batches and migrate all jobs and datafeeds (#36716)

Relates #32905

* SQL: Fix translation of LIKE/RLIKE keywords (#36672)

* SQL: Fix translation of LIKE/RLIKE keywords

Refactor Like/RLike functions to simplify internals and improve query
 translation when chained or within a script context.

Fix #36039
Fix #36584

* Fixing line length for EnvironmentTests and RecoveryTests (#36657)

Relates #34884

* Add back one line removed by mistake regarding the java version check and
the existence of the COMPAT jvm parameter

* Do not resolve addresses in remote connection info (#36671)

The remote connection info API leads to resolving addresses of seed
nodes when invoked. This is problematic because if a hostname fails to
resolve, we would not display any remote connection info. Yet, a
hostname not resolving can happen across remote clusters, especially in
the modern world of cloud services with dynamically changing
IPs. Instead, the remote connection info API should provide the
configured seed nodes. This commit changes the remote connection info to
display the configured seed nodes, avoiding hostname resolution. Note
that care was taken to preserve backwards compatibility with previous
versions that expect the remote connection info to serialize a transport
address instead of a string representing the hostname.

* [Painless] Add boxed type to boxed type casts for method/return (#36571)

This adds implicit boxed type to boxed type casts for non-def types to create asymmetric casting relative to the def type when calling methods or returning values. This means that a user calling a method that takes an Integer can legally call it with a Byte, Short, etc., which matches the way def works. This creates consistency in the casting model that did not previously exist.

* SNAPSHOTS: Adjust BwC Versions in Restore Logic (#36718)

* Re-enables bwc tests with adjusted version conditions now that #36397 enables concurrent snapshots in 6.6+

* ingest: fix on_failure with Drop processor (#36686)

This commit allows a document to be dropped when a Drop processor
is used in the on_failure fork of the processor chain.

Fixes #36151
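
As a rough illustration (assuming a local cluster and the low-level REST client; the pipeline name and processor configuration are made up), a pipeline whose on_failure branch drops the failing document could be registered like this:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class DropOnFailureExample {
    public static void main(String[] args) throws Exception {
        // Pipeline whose on_failure branch uses the drop processor, so a document
        // that trips the fail processor is silently discarded instead of erroring.
        String pipeline =
              "{ \"processors\": [ { \"fail\": { \"message\": \"force a failure\" } } ],"
            + "  \"on_failure\": [ { \"drop\": {} } ] }";
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("PUT", "/_ingest/pipeline/drop-on-failure");
            request.setJsonEntity(pipeline);
            client.performRequest(request);
        }
    }
}
```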

* Initialize startup `CcrRepositories` (#36730)

Currently, the CcrRepositoryManager only listens for settings updates
and installs new repositories. It does not install the repositories that
are in the initial settings. This commit modifies the manager to
install the initial repositories. Additionally, it modifies the ccr
integration test to configure the remote leader node at startup, instead
of using a settings update.

* [TEST] fix float comparison in RandomObjects#getExpectedParsedValue

This commit fixes a test bug introduced with #36597. It caused some
test failures, as stored field values comparisons would not work when the CBOR
xcontent type was used.

Closes #29080

* [Geo] Integrate Lucene's LatLonShape (BKD Backed GeoShapes) as default `geo_shape` indexing approach (#35320)

This commit exposes Lucene's LatLonShape field as the
default type in GeoShapeFieldMapper. To use the new
indexing approach, simply set "type": "geo_shape" in
the mappings without setting any of the strategy, precision,
tree_levels, or distance_error_pct parameters. Note the
following when using the new indexing approach:

* geo_shape query does not support querying by
MULTIPOINT.
* LINESTRING and MULTILINESTRING queries do not
yet support WITHIN relation.
* CONTAINS relation is not yet supported.

The tree, precision, tree_levels, distance_error_pct,
and points_only parameters are deprecated.

* TESTS: Debug Log. IndexStatsIT#testFilterCacheStats

* ingest: support default pipelines + bulk upserts (#36618)

This commit adds support to enable bulk upserts to use an index's
default pipeline. Bulk upsert, doc_as_upsert, and script_as_upsert
are all supported.

However, bulk script_as_upsert has slightly surprising behavior since
the pipeline is executed _before_ the script is evaluated. This means
that the pipeline only has access to the data found in the upsert field
of the script_as_upsert. The non-bulk script_as_upsert (existing behavior)
runs the pipeline _after_ the script is executed. This commit
does _not_ attempt to consolidate the bulk and non-bulk behavior for
script_as_upsert.

This commit also adds additional testing for the non-bulk behavior,
which remains unchanged with this commit.

fixes #36219
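
A minimal sketch of the bulk doc_as_upsert case (index name, id and field are placeholders; the target index is assumed to have a default pipeline configured via `index.default_pipeline`):

```java
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.common.xcontent.XContentType;

public class BulkUpsertSketch {
    public static BulkRequest buildBulkUpsert() {
        // The upserted document now goes through the index's default pipeline,
        // just like a plain index request would.
        UpdateRequest upsert = new UpdateRequest("my-index", "doc", "1")
                .doc("{\"counter\": 1}", XContentType.JSON)
                .docAsUpsert(true);
        return new BulkRequest().add(upsert);
    }
}
```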

* Fix duplicate phrase in shrink/split error message (#36734)

This commit removes a duplicate "must be a" from the shrink/split error
messages.

* Deprecate types in get_source and exist_source (#36426)

This change adds a new untyped endpoint `{index}/_source/{id}` for both the
GET and the HEAD methods to get the source of a document or check for its
existence. It also adds deprecation warnings to RestGetSourceAction that emit
a warning when the old deprecated "type" parameter is still used. Documentation
and tests are also updated where appropriate.

Relates to #35190
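
For illustration, the new endpoint can be exercised with the low-level REST client roughly as follows (index name and document id are placeholders):

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class GetSourceSketch {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // Fetch only the _source of a document, with no type in the URL
            Response get = client.performRequest(new Request("GET", "/my-index/_source/1"));
            System.out.println(get.getStatusLine());
            // Check that the source exists without fetching it
            Response head = client.performRequest(new Request("HEAD", "/my-index/_source/1"));
            System.out.println(head.getStatusLine().getStatusCode());
        }
    }
}
```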

* Revert "[Geo] Integrate Lucene's LatLonShape (BKD Backed GeoShapes) as default `geo_shape` indexing approach (#35320)"

This reverts commit 5bc7822.

* Enhance Invalidate Token API (#35388)

This change:

- Adds functionality to invalidate all (refresh+access) tokens for all users of a realm
- Adds functionality to invalidate all (refresh+access) tokens for a user in all realms
- Adds functionality to invalidate all (refresh+access) tokens for a user in a specific realm
- Changes the response format for the invalidate token API to contain information about the
   number of invalidated tokens and possible errors that were encountered.
- Updates the API documentation

After back-porting to 6.x, the `created` field will be removed from master as a field in the 
response

Resolves: #35115
Relates: #34556

* Add raw sort values to SearchSortValues transport serialization (#36617)

In order for CCS alternate execution mode (see #32125) to be able to do the final reduction step on the CCS coordinating node, we need to serialize additional info in the transport layer as part of each `SearchHit`. Sort values are already present but they are formatted according to the provided `DocValueFormat`. The CCS node needs to be able to reconstruct the lucene `FieldDoc` to include in the `TopFieldDocs` and `CollapseTopFieldDocs` which will feed the `mergeTopDocs` method used to reduce multiple search responses (one per cluster) into one.

This commit adds such information to the `SearchSortValues` and exposes it through a new getter method added to `SearchHit` for retrieval. This info is only serialized at transport and never printed out at REST.

* Watcher: Ensure all internal search requests count hits (#36697)

In previous commits, only the stored toXContent version of a search
request was using the old format. However, an executed search request was
already disabling hit counts. In 7.0, hit counts will stay enabled by
default to allow for proper migration.

Closes #36177

* [TEST] Ensure shard follow tasks have really stopped.

Relates to #36696

* Ensure MapperService#getAllMetaFields elements order is deterministic (#36739)

MapperService#getAllMetaFields returns an array, which is created out of
an `ObjectHashSet`. Such a set does not guarantee deterministic hash
ordering. The array returned by its toArray may be sorted differently
at each run. This caused some repeatability issues in our tests (see #29080)
as we pick random fields from the array of possible metadata fields,
but that won't be repeatable if the input array is sorted differently at
every run. Once the test seed is set, hppc picks that up and the ordering is
deterministic, but failures don't reproduce with the seed that gets printed out
originally (as a seed was not originally set).
See also https://issues.carrot2.org/projects/HPPC/issues/HPPC-173.

With this commit, we simply create a static sorted array that is used for
`getAllMetaFields`. The change is in production code but really affects
only testing, as the only production usage of this method was to iterate
through all values when parsing fields in the high-level REST client code.
In any case, this seems like a good change, as returning an array would imply
that it's deterministically sorted.
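
A simplified sketch of the idea (not the actual MapperService code; the field list here is partial and illustrative):

```java
import java.util.Arrays;

public class MetaFieldsSketch {
    // Sorted once at class load, so iteration order is identical on every run.
    private static final String[] SORTED_META_FIELDS;
    static {
        String[] fields = { "_id", "_index", "_routing", "_source", "_type" };
        Arrays.sort(fields);
        SORTED_META_FIELDS = fields;
    }

    public static String[] getAllMetaFields() {
        // Hand out a copy so callers cannot mutate the shared array.
        return Arrays.copyOf(SORTED_META_FIELDS, SORTED_META_FIELDS.length);
    }
}
```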

* Expose Sequence Number based Optimistic Concurrency Control in the rest layer (#36721)

Relates #36148 
Relates #10708

* [ML] Mute MlDistributedFailureIT
davidkyle authored Dec 18, 2018
1 parent ea9b08d commit e294056
Showing 149 changed files with 9,341 additions and 3,081 deletions.
@@ -59,7 +59,6 @@ public class Job implements ToXContentObject {
public static final ParseField DATA_DESCRIPTION = new ParseField("data_description");
public static final ParseField DESCRIPTION = new ParseField("description");
public static final ParseField FINISHED_TIME = new ParseField("finished_time");
public static final ParseField ESTABLISHED_MODEL_MEMORY = new ParseField("established_model_memory");
public static final ParseField MODEL_PLOT_CONFIG = new ParseField("model_plot_config");
public static final ParseField RENORMALIZATION_WINDOW_DAYS = new ParseField("renormalization_window_days");
public static final ParseField BACKGROUND_PERSIST_INTERVAL = new ParseField("background_persist_interval");
@@ -84,7 +83,6 @@ public class Job implements ToXContentObject {
(p) -> TimeUtil.parseTimeField(p, FINISHED_TIME.getPreferredName()),
FINISHED_TIME,
ValueType.VALUE);
PARSER.declareLong(Builder::setEstablishedModelMemory, ESTABLISHED_MODEL_MEMORY);
PARSER.declareObject(Builder::setAnalysisConfig, AnalysisConfig.PARSER, ANALYSIS_CONFIG);
PARSER.declareObject(Builder::setAnalysisLimits, AnalysisLimits.PARSER, ANALYSIS_LIMITS);
PARSER.declareObject(Builder::setDataDescription, DataDescription.PARSER, DATA_DESCRIPTION);
@@ -107,7 +105,6 @@ public class Job implements ToXContentObject {
private final String description;
private final Date createTime;
private final Date finishedTime;
private final Long establishedModelMemory;
private final AnalysisConfig analysisConfig;
private final AnalysisLimits analysisLimits;
private final DataDescription dataDescription;
@@ -122,7 +119,7 @@ public class Job implements ToXContentObject {
private final Boolean deleting;

private Job(String jobId, String jobType, List<String> groups, String description,
Date createTime, Date finishedTime, Long establishedModelMemory,
Date createTime, Date finishedTime,
AnalysisConfig analysisConfig, AnalysisLimits analysisLimits, DataDescription dataDescription,
ModelPlotConfig modelPlotConfig, Long renormalizationWindowDays, TimeValue backgroundPersistInterval,
Long modelSnapshotRetentionDays, Long resultsRetentionDays, Map<String, Object> customSettings,
@@ -134,7 +131,6 @@ private Job(String jobId, String jobType, List<String> groups, String descriptio
this.description = description;
this.createTime = createTime;
this.finishedTime = finishedTime;
this.establishedModelMemory = establishedModelMemory;
this.analysisConfig = analysisConfig;
this.analysisLimits = analysisLimits;
this.dataDescription = dataDescription;
@@ -204,16 +200,6 @@ public Date getFinishedTime() {
return finishedTime;
}

/**
* The established model memory of the job, or <code>null</code> if model
* memory has not reached equilibrium yet.
*
* @return The established model memory of the job
*/
public Long getEstablishedModelMemory() {
return establishedModelMemory;
}

/**
* The analysis configuration object
*
@@ -306,9 +292,6 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws
builder.timeField(FINISHED_TIME.getPreferredName(), FINISHED_TIME.getPreferredName() + humanReadableSuffix,
finishedTime.getTime());
}
if (establishedModelMemory != null) {
builder.field(ESTABLISHED_MODEL_MEMORY.getPreferredName(), establishedModelMemory);
}
builder.field(ANALYSIS_CONFIG.getPreferredName(), analysisConfig, params);
if (analysisLimits != null) {
builder.field(ANALYSIS_LIMITS.getPreferredName(), analysisLimits, params);
@@ -364,7 +347,6 @@ public boolean equals(Object other) {
&& Objects.equals(this.description, that.description)
&& Objects.equals(this.createTime, that.createTime)
&& Objects.equals(this.finishedTime, that.finishedTime)
&& Objects.equals(this.establishedModelMemory, that.establishedModelMemory)
&& Objects.equals(this.analysisConfig, that.analysisConfig)
&& Objects.equals(this.analysisLimits, that.analysisLimits)
&& Objects.equals(this.dataDescription, that.dataDescription)
@@ -381,7 +363,7 @@ public boolean equals(Object other) {

@Override
public int hashCode() {
return Objects.hash(jobId, jobType, groups, description, createTime, finishedTime, establishedModelMemory,
return Objects.hash(jobId, jobType, groups, description, createTime, finishedTime,
analysisConfig, analysisLimits, dataDescription, modelPlotConfig, renormalizationWindowDays,
backgroundPersistInterval, modelSnapshotRetentionDays, resultsRetentionDays, customSettings,
modelSnapshotId, resultsIndexName, deleting);
@@ -407,7 +389,6 @@ public static class Builder {
private DataDescription dataDescription;
private Date createTime;
private Date finishedTime;
private Long establishedModelMemory;
private ModelPlotConfig modelPlotConfig;
private Long renormalizationWindowDays;
private TimeValue backgroundPersistInterval;
@@ -435,7 +416,6 @@ public Builder(Job job) {
this.dataDescription = job.getDataDescription();
this.createTime = job.getCreateTime();
this.finishedTime = job.getFinishedTime();
this.establishedModelMemory = job.getEstablishedModelMemory();
this.modelPlotConfig = job.getModelPlotConfig();
this.renormalizationWindowDays = job.getRenormalizationWindowDays();
this.backgroundPersistInterval = job.getBackgroundPersistInterval();
@@ -496,11 +476,6 @@ Builder setFinishedTime(Date finishedTime) {
return this;
}

public Builder setEstablishedModelMemory(Long establishedModelMemory) {
this.establishedModelMemory = establishedModelMemory;
return this;
}

public Builder setDataDescription(DataDescription.Builder description) {
dataDescription = Objects.requireNonNull(description, DATA_DESCRIPTION.getPreferredName()).build();
return this;
@@ -555,7 +530,7 @@ public Job build() {
Objects.requireNonNull(id, "[" + ID.getPreferredName() + "] must not be null");
Objects.requireNonNull(jobType, "[" + JOB_TYPE.getPreferredName() + "] must not be null");
return new Job(
id, jobType, groups, description, createTime, finishedTime, establishedModelMemory,
id, jobType, groups, description, createTime, finishedTime,
analysisConfig, analysisLimits, dataDescription, modelPlotConfig, renormalizationWindowDays,
backgroundPersistInterval, modelSnapshotRetentionDays, resultsRetentionDays, customSettings,
modelSnapshotId, resultsIndexName, deleting);
@@ -125,9 +125,6 @@ public static Job.Builder createRandomizedJobBuilder() {
if (randomBoolean()) {
builder.setFinishedTime(new Date(randomNonNegativeLong()));
}
if (randomBoolean()) {
builder.setEstablishedModelMemory(randomNonNegativeLong());
}
builder.setAnalysisConfig(AnalysisConfigTests.createRandomized());
builder.setAnalysisLimits(AnalysisLimitsTests.createRandomized());

5 changes: 0 additions & 5 deletions docs/reference/ml/apis/jobresource.asciidoc
@@ -42,11 +42,6 @@ so do not set the `background_persist_interval` value too low.
`description`::
(string) An optional description of the job.

`established_model_memory`::
(long) The approximate amount of memory resources that have been used for
analytical processing. This field is present only when the analytics have used
a stable amount of memory for several consecutive buckets.

`finished_time`::
(string) If the job closed or failed, this is the time the job finished,
otherwise it is `null`. This property is informational; you cannot change its
@@ -65,6 +65,7 @@
import org.elasticsearch.xpack.core.logstash.LogstashFeatureSetUsage;
import org.elasticsearch.xpack.core.ml.MachineLearningFeatureSetUsage;
import org.elasticsearch.xpack.core.ml.MlMetadata;
import org.elasticsearch.xpack.core.ml.MlTasks;
import org.elasticsearch.xpack.core.ml.action.CloseJobAction;
import org.elasticsearch.xpack.core.ml.action.DeleteCalendarAction;
import org.elasticsearch.xpack.core.ml.action.DeleteCalendarEventAction;
@@ -363,9 +364,9 @@ public List<NamedWriteableRegistry.Entry> getNamedWriteables() {
new NamedWriteableRegistry.Entry(MetaData.Custom.class, "ml", MlMetadata::new),
new NamedWriteableRegistry.Entry(NamedDiff.class, "ml", MlMetadata.MlMetadataDiff::new),
// ML - Persistent action requests
new NamedWriteableRegistry.Entry(PersistentTaskParams.class, StartDatafeedAction.TASK_NAME,
new NamedWriteableRegistry.Entry(PersistentTaskParams.class, MlTasks.DATAFEED_TASK_NAME,
StartDatafeedAction.DatafeedParams::new),
new NamedWriteableRegistry.Entry(PersistentTaskParams.class, OpenJobAction.TASK_NAME,
new NamedWriteableRegistry.Entry(PersistentTaskParams.class, MlTasks.JOB_TASK_NAME,
OpenJobAction.JobParams::new),
// ML - Task states
new NamedWriteableRegistry.Entry(PersistentTaskState.class, JobTaskState.NAME, JobTaskState::new),
@@ -433,9 +434,9 @@ public List<NamedXContentRegistry.Entry> getNamedXContent() {
new NamedXContentRegistry.Entry(MetaData.Custom.class, new ParseField("ml"),
parser -> MlMetadata.LENIENT_PARSER.parse(parser, null).build()),
// ML - Persistent action requests
new NamedXContentRegistry.Entry(PersistentTaskParams.class, new ParseField(StartDatafeedAction.TASK_NAME),
new NamedXContentRegistry.Entry(PersistentTaskParams.class, new ParseField(MlTasks.DATAFEED_TASK_NAME),
StartDatafeedAction.DatafeedParams::fromXContent),
new NamedXContentRegistry.Entry(PersistentTaskParams.class, new ParseField(OpenJobAction.TASK_NAME),
new NamedXContentRegistry.Entry(PersistentTaskParams.class, new ParseField(MlTasks.JOB_TASK_NAME),
OpenJobAction.JobParams::fromXContent),
// ML - Task states
new NamedXContentRegistry.Entry(PersistentTaskState.class, new ParseField(DatafeedState.NAME), DatafeedState::fromXContent),
@@ -21,8 +21,6 @@ public final class MlMetaIndex {
*/
public static final String INDEX_NAME = ".ml-meta";

public static final String INCLUDE_TYPE_KEY = "include_type";

public static final String TYPE = "doc";

private MlMetaIndex() {}
@@ -5,7 +5,6 @@
*/
package org.elasticsearch.xpack.core.ml;

import org.elasticsearch.ResourceAlreadyExistsException;
import org.elasticsearch.ResourceNotFoundException;
import org.elasticsearch.Version;
import org.elasticsearch.cluster.AbstractDiffable;
@@ -146,7 +145,6 @@ public MlMetadata(StreamInput in) throws IOException {
datafeeds.put(in.readString(), new DatafeedConfig(in));
}
this.datafeeds = datafeeds;

this.groupOrJobLookup = new GroupOrJobLookup(jobs.values());
}

@@ -167,7 +165,7 @@ private static <T extends Writeable> void writeMap(Map<String, T> map, StreamOut
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
DelegatingMapParams extendedParams =
new DelegatingMapParams(Collections.singletonMap(ToXContentParams.FOR_CLUSTER_STATE, "true"), params);
new DelegatingMapParams(Collections.singletonMap(ToXContentParams.FOR_INTERNAL_STORAGE, "true"), params);
mapValuesToXContent(JOBS_FIELD, jobs, builder, extendedParams);
mapValuesToXContent(DATAFEEDS_FIELD, datafeeds, builder, extendedParams);
return builder;
@@ -196,9 +194,14 @@ public MlMetadataDiff(StreamInput in) throws IOException {
this.jobs = DiffableUtils.readJdkMapDiff(in, DiffableUtils.getStringKeySerializer(), Job::new,
MlMetadataDiff::readJobDiffFrom);
this.datafeeds = DiffableUtils.readJdkMapDiff(in, DiffableUtils.getStringKeySerializer(), DatafeedConfig::new,
MlMetadataDiff::readSchedulerDiffFrom);
MlMetadataDiff::readDatafeedDiffFrom);
}

/**
* Merge the diff with the ML metadata.
* @param part The current ML metadata.
* @return The new ML metadata.
*/
@Override
public MetaData.Custom apply(MetaData.Custom part) {
TreeMap<String, Job> newJobs = new TreeMap<>(jobs.apply(((MlMetadata) part).jobs));
@@ -221,7 +224,7 @@ static Diff<Job> readJobDiffFrom(StreamInput in) throws IOException {
return AbstractDiffable.readDiffFrom(Job::new, in);
}

static Diff<DatafeedConfig> readSchedulerDiffFrom(StreamInput in) throws IOException {
static Diff<DatafeedConfig> readDatafeedDiffFrom(StreamInput in) throws IOException {
return AbstractDiffable.readDiffFrom(DatafeedConfig::new, in);
}
}
@@ -295,7 +298,7 @@ public Builder deleteJob(String jobId, PersistentTasksCustomMetaData tasks) {

public Builder putDatafeed(DatafeedConfig datafeedConfig, Map<String, String> headers) {
if (datafeeds.containsKey(datafeedConfig.getId())) {
throw new ResourceAlreadyExistsException("A datafeed with id [" + datafeedConfig.getId() + "] already exists");
throw ExceptionsHelper.datafeedAlreadyExists(datafeedConfig.getId());
}
String jobId = datafeedConfig.getJobId();
checkJobIsAvailableForDatafeed(jobId);
@@ -369,14 +372,14 @@ private void checkDatafeedIsStopped(Supplier<String> msg, String datafeedId, Per
}
}

private Builder putJobs(Collection<Job> jobs) {
public Builder putJobs(Collection<Job> jobs) {
for (Job job : jobs) {
putJob(job, true);
}
return this;
}

private Builder putDatafeeds(Collection<DatafeedConfig> datafeeds) {
public Builder putDatafeeds(Collection<DatafeedConfig> datafeeds) {
for (DatafeedConfig datafeed : datafeeds) {
this.datafeeds.put(datafeed.getId(), datafeed);
}
@@ -421,8 +424,6 @@ void checkJobHasNoDatafeed(String jobId) {
}
}



public static MlMetadata getMlMetadata(ClusterState state) {
MlMetadata mlMetadata = (state == null) ? null : state.getMetaData().custom(TYPE);
if (mlMetadata == null) {
@@ -12,8 +12,19 @@
import org.elasticsearch.xpack.core.ml.job.config.JobState;
import org.elasticsearch.xpack.core.ml.job.config.JobTaskState;

import java.util.Collections;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public final class MlTasks {

public static final String JOB_TASK_NAME = "xpack/ml/job";
public static final String DATAFEED_TASK_NAME = "xpack/ml/datafeed";

private static final String JOB_TASK_ID_PREFIX = "job-";
private static final String DATAFEED_TASK_ID_PREFIX = "datafeed-";

private MlTasks() {
}

@@ -22,15 +33,15 @@ private MlTasks() {
* A datafeed id can be used as a job id, because they are stored separately in cluster state.
*/
public static String jobTaskId(String jobId) {
return "job-" + jobId;
return JOB_TASK_ID_PREFIX + jobId;
}

/**
* Namespaces the task ids for datafeeds.
* A job id can be used as a datafeed id, because they are stored separately in cluster state.
*/
public static String datafeedTaskId(String datafeedId) {
return "datafeed-" + datafeedId;
return DATAFEED_TASK_ID_PREFIX + datafeedId;
}

@Nullable
@@ -67,4 +78,64 @@ public static DatafeedState getDatafeedState(String datafeedId, @Nullable Persis
return DatafeedState.STOPPED;
}
}

/**
* The job Ids of anomaly detector job tasks.
* All anomaly detector jobs are returned regardless of the status of the
* task (OPEN, CLOSED, FAILED etc).
*
* @param tasks Persistent tasks. If null an empty set is returned.
* @return The job Ids of anomaly detector job tasks
*/
public static Set<String> openJobIds(@Nullable PersistentTasksCustomMetaData tasks) {
if (tasks == null) {
return Collections.emptySet();
}

return tasks.findTasks(JOB_TASK_NAME, task -> true)
.stream()
.map(t -> t.getId().substring(JOB_TASK_ID_PREFIX.length()))
.collect(Collectors.toSet());
}

/**
* The datafeed Ids of started datafeed tasks
*
* @param tasks Persistent tasks. If null an empty set is returned.
* @return The Ids of running datafeed tasks
*/
public static Set<String> startedDatafeedIds(@Nullable PersistentTasksCustomMetaData tasks) {
if (tasks == null) {
return Collections.emptySet();
}

return tasks.findTasks(DATAFEED_TASK_NAME, task -> true)
.stream()
.map(t -> t.getId().substring(DATAFEED_TASK_ID_PREFIX.length()))
.collect(Collectors.toSet());
}

/**
* Is there an ml anomaly detector job task for the job {@code jobId}?
* @param jobId The job id
* @param tasks Persistent tasks
* @return True if the job has a task
*/
public static boolean taskExistsForJob(String jobId, PersistentTasksCustomMetaData tasks) {
return openJobIds(tasks).contains(jobId);
}

/**
* Read the active anomaly detector job tasks.
* Active tasks are not {@code JobState.CLOSED} or {@code JobState.FAILED}.
*
* @param tasks Persistent tasks
* @return The job tasks excluding closed and failed jobs
*/
public static List<PersistentTasksCustomMetaData.PersistentTask<?>> activeJobTasks(PersistentTasksCustomMetaData tasks) {
return tasks.findTasks(JOB_TASK_NAME, task -> true)
.stream()
.filter(task -> ((JobTaskState) task.getState()).getState().isAnyOf(JobState.CLOSED, JobState.FAILED) == false)
.collect(Collectors.toList());
}
}