Merge branch 'master' into align-thread-pool-info-cat-api
* master:
  Reindex: Fix error in delete-by-query rest spec (elastic#29318)
  Improve similarity integration. (elastic#29187)
  Fix some query extraction bugs. (elastic#29283)
  [Docs] Correct experimental note formatting
  Move Nullable into core (elastic#29341)
  [Docs] Update getting-started.asciidoc (elastic#29294)
  Elasticsearch 6.3.0 is now on Lucene 7.3.
  [DOCS] Refer back to index API for full-document updates in _update API section (elastic#28677)
  Fix missing comma in ingest-node.asciidoc (elastic#29343)
  Improve exception handling on TransportMasterNodeAction (elastic#29314)
  Don't break allocation if resize source index is missing (elastic#29311)
  Use fixture to test repository-s3 plugin (elastic#29296)
  Fix NDCG for empty search results (elastic#29267)
  Pass through script params in scripted metric agg (elastic#29154)
  Fix Eclipse build.
  Upgrade to lucene-7.3.0-snapshot-98a6b3d. (elastic#29298)
  Painless: Remove extraneous INLINE constant. (elastic#29340)
jasontedor committed Apr 3, 2018
2 parents 1112584 + 4db6fc9 commit 5d60e36
Showing 166 changed files with 2,134 additions and 1,366 deletions.
@@ -95,7 +95,7 @@ public class PluginBuildPlugin extends BuildPlugin {
 // we "upgrade" these optional deps to provided for plugins, since they will run
 // with a full elasticsearch server that includes optional deps
 compileOnly "org.locationtech.spatial4j:spatial4j:${project.versions.spatial4j}"
-compileOnly "com.vividsolutions:jts:${project.versions.jts}"
+compileOnly "org.locationtech.jts:jts-core:${project.versions.jts}"
 compileOnly "org.apache.logging.log4j:log4j-api:${project.versions.log4j}"
 compileOnly "org.apache.logging.log4j:log4j-core:${project.versions.log4j}"
 compileOnly "org.elasticsearch:jna:${project.versions.jna}"
6 changes: 3 additions & 3 deletions buildSrc/version.properties
@@ -1,9 +1,9 @@
 elasticsearch = 7.0.0-alpha1
-lucene = 7.2.1
+lucene = 7.3.0-snapshot-98a6b3d
 
 # optional dependencies
-spatial4j = 0.6
-jts = 1.13
+spatial4j = 0.7
+jts = 1.15.0
 jackson = 2.8.10
 snakeyaml = 1.17
 # when updating log4j, please update also docs/java-api/index.asciidoc
4 changes: 2 additions & 2 deletions docs/Versions.asciidoc
@@ -1,7 +1,7 @@
 :version: 7.0.0-alpha1
 :major-version: 7.x
-:lucene_version: 7.2.1
-:lucene_version_path: 7_2_1
+:lucene_version: 7.3.0
+:lucene_version_path: 7_3_0
 :branch: master
 :jdk: 1.8.0_131
 :jdk_major: 8
10 changes: 5 additions & 5 deletions docs/java-api/query-dsl/geo-shape-query.asciidoc
@@ -12,13 +12,13 @@ to your classpath in order to use this type:
 <dependency>
 <groupId>org.locationtech.spatial4j</groupId>
 <artifactId>spatial4j</artifactId>
-<version>0.6</version> <1>
+<version>0.7</version> <1>
 </dependency>
 <dependency>
-<groupId>com.vividsolutions</groupId>
-<artifactId>jts</artifactId>
-<version>1.13</version> <2>
+<groupId>org.locationtech.jts</groupId>
+<artifactId>jts-core</artifactId>
+<version>1.15.0</version> <2>
 <exclusions>
 <exclusion>
 <groupId>xerces</groupId>
@@ -28,7 +28,7 @@ to your classpath in order to use this type:
 </dependency>
 -----------------------------------------------
 <1> check for updates in http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.locationtech.spatial4j%22%20AND%20a%3A%22spatial4j%22[Maven Central]
-<2> check for updates in http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22com.vividsolutions%22%20AND%20a%3A%22jts%22[Maven Central]
+<2> check for updates in http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.locationtech.jts%22%20AND%20a%3A%22jts-core%22[Maven Central]
 
 [source,java]
 --------------------------------------------------
7 changes: 5 additions & 2 deletions docs/reference/docs/update.asciidoc
@@ -118,8 +118,11 @@ POST test/_doc/1/_update
 
 The update API also support passing a partial document,
 which will be merged into the existing document (simple recursive merge,
-inner merging of objects, replacing core "keys/values" and arrays). For
-example:
+inner merging of objects, replacing core "keys/values" and arrays).
+To fully replace the existing document, the <<docs-index_,`index` API>> should
+be used instead.
+The following partial update adds a new field to the
+existing document:
 
 [source,js]
 --------------------------------------------------
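To make the merge behaviour concrete, a minimal sketch of such a partial update follows; the field name `new_field` is made up for illustration, while the endpoint matches the `POST test/_doc/1/_update` request shown in the hunk above.

[source,js]
--------------------------------------------------
POST test/_doc/1/_update
{
  "doc": {
    "new_field": "value merged into the existing document"
  }
}
--------------------------------------------------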
2 changes: 1 addition & 1 deletion docs/reference/getting-started.asciidoc
@@ -777,7 +777,7 @@ GET /bank/_search
 // CONSOLE
 // TEST[continued]
 
-The difference here is that instead of passing `q=*` in the URI, we POST a JSON-style query request body to the `_search` API. We'll discuss this JSON query in the next section.
+The difference here is that instead of passing `q=*` in the URI, we provide a JSON-style query request body to the `_search` API. We'll discuss this JSON query in the next section.
 
 ////
 Hidden response just so we can assert that it is indeed the same but don't have
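For comparison, a request-body search that matches everything — roughly what `q=*` does in the URI form — might look like the sketch below; the exact JSON used by the guide is introduced in its next section.

[source,js]
--------------------------------------------------
GET /bank/_search
{
  "query": { "match_all": {} }
}
--------------------------------------------------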
18 changes: 2 additions & 16 deletions docs/reference/index-modules/similarity.asciidoc
@@ -82,20 +82,6 @@ This similarity has the following options:
 
 Type name: `BM25`
 
-[float]
-[[classic-similarity]]
-==== Classic similarity
-
-The classic similarity that is based on the TF/IDF model. This
-similarity has the following option:
-
-`discount_overlaps`::
-Determines whether overlap tokens (Tokens with
-0 position increment) are ignored when computing norm. By default this
-is true, meaning overlap tokens do not count when computing norms.
-
-Type name: `classic`
-
 [float]
 [[dfr]]
 ==== DFR similarity
@@ -541,7 +527,7 @@ PUT /index
 "index": {
 "similarity": {
 "default": {
-"type": "classic"
+"type": "boolean"
 }
 }
 }
@@ -563,7 +549,7 @@ PUT /index/_settings
 "index": {
 "similarity": {
 "default": {
-"type": "classic"
+"type": "boolean"
 }
 }
 }
2 changes: 1 addition & 1 deletion docs/reference/ingest/ingest-node.asciidoc
@@ -1341,7 +1341,7 @@ Here is an example of a pipeline specifying custom pattern definitions:
 {
 "grok": {
 "field": "message",
-"patterns": ["my %{FAVORITE_DOG:dog} is colored %{RGB:color}"]
+"patterns": ["my %{FAVORITE_DOG:dog} is colored %{RGB:color}"],
 "pattern_definitions" : {
 "FAVORITE_DOG" : "beagle",
 "RGB" : "RED|GREEN|BLUE"
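A quick way to exercise such a pipeline is the simulate API; the sketch below reuses the grok configuration from the hunk above with a made-up sample message. If the pattern matches, the simulated document comes back with `dog` and `color` fields extracted.

[source,js]
--------------------------------------------------
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": ["my %{FAVORITE_DOG:dog} is colored %{RGB:color}"],
          "pattern_definitions": {
            "FAVORITE_DOG": "beagle",
            "RGB": "RED|GREEN|BLUE"
          }
        }
      }
    ]
  },
  "docs": [
    { "_source": { "message": "my beagle is colored RED" } }
  ]
}
--------------------------------------------------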
9 changes: 2 additions & 7 deletions docs/reference/mapping/params/similarity.asciidoc
@@ -44,13 +44,9 @@ PUT my_index
 "default_field": { <1>
 "type": "text"
 },
-"classic_field": {
-"type": "text",
-"similarity": "classic" <2>
-},
 "boolean_sim_field": {
 "type": "text",
-"similarity": "boolean" <3>
+"similarity": "boolean" <2>
 }
 }
 }
@@ -59,5 +55,4 @@ PUT my_index
 --------------------------------------------------
 // CONSOLE
 <1> The `default_field` uses the `BM25` similarity.
-<2> The `classic_field` uses the `classic` similarity (ie TF/IDF).
-<3> The `boolean_sim_field` uses the `boolean` similarity.
+<2> The `boolean_sim_field` uses the `boolean` similarity.
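As a rough illustration of the per-field setting above: with this mapping, a match query against `boolean_sim_field` is scored with the `boolean` similarity (essentially constant scores), while `default_field` keeps `BM25`. A hypothetical query, with made-up search terms, might be:

[source,js]
--------------------------------------------------
GET my_index/_search
{
  "query": {
    "match": { "boolean_sim_field": "quick brown fox" }
  }
}
--------------------------------------------------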
13 changes: 13 additions & 0 deletions docs/reference/migration/migrate_7_0/mappings.asciidoc
@@ -24,3 +24,16 @@ the index setting `index.mapping.nested_objects.limit`.
 ==== The `update_all_types` option has been removed
 
 This option is useless now that all indices have at most one type.
+
+=== The `classic` similarity has been removed
+
+The `classic` similarity relied on coordination factors for scoring to be good
+in presence of stopwords in the query. This feature has been removed from
+Lucene, which means that the `classic` similarity now produces scores of lower
+quality. It is advised to switch to `BM25` instead, which is widely accepted
+as a better alternative.
+
+=== Similarities fail when unsupported options are provided
+
+An error will now be thrown when unknown configuration options are provided
+to similarities. Such unknown parameters were ignored before.
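A minimal sketch of the switch the migration note recommends — creating an index whose default similarity is `BM25` rather than the removed `classic`; the index name and settings layout are assumed from the similarity docs changed above.

[source,js]
--------------------------------------------------
PUT /index
{
  "settings": {
    "index": {
      "similarity": {
        "default": {
          "type": "BM25"
        }
      }
    }
  }
}
--------------------------------------------------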
4 changes: 1 addition & 3 deletions docs/reference/search/rank-eval.asciidoc
@@ -1,9 +1,7 @@
 [[search-rank-eval]]
 == Ranking Evaluation API
 
-experimental[The ranking evaluation API is experimental and may be changed or removed completely in a future release,
-as well as change in non-backwards compatible ways on minor versions updates. Elastic will take a best effort
-approach to fix any issues, but experimental features are not subject to the support SLA of official GA features.]
+experimental[The ranking evaluation API is experimental and may be changed or removed completely in a future release, as well as change in non-backwards compatible ways on minor versions updates. Elastic will take a best effort approach to fix any issues, but experimental features are not subject to the support SLA of official GA features.]
 
 The ranking evaluation API allows to evaluate the quality of ranked search
 results over a set of typical search queries. Given this set of queries and a
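For orientation, a ranking-evaluation request of roughly the shape this API accepts is sketched below. The index, query, ratings, and metric are invented for illustration, and the exact parameter names are an assumption that may differ in this version of the API.

[source,js]
--------------------------------------------------
GET /my_index/_rank_eval
{
  "requests": [
    {
      "id": "example_query",
      "request": { "query": { "match": { "text": "amsterdam" } } },
      "ratings": [
        { "_index": "my_index", "_id": "doc1", "rating": 1 },
        { "_index": "my_index", "_id": "doc2", "rating": 0 }
      ]
    }
  ],
  "metric": {
    "mean_reciprocal_rank": {}
  }
}
--------------------------------------------------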
3 changes: 3 additions & 0 deletions libs/x-content/src/main/eclipse-build.gradle
@@ -0,0 +1,3 @@
+
+// this is just shell gradle file for eclipse to have separate projects for secure-sm src and tests
+apply from: '../../build.gradle'
7 changes: 7 additions & 0 deletions libs/x-content/src/test/eclipse-build.gradle
@@ -0,0 +1,7 @@
+
+// this is just shell gradle file for eclipse to have separate projects for secure-sm src and tests
+apply from: '../../build.gradle'
+
+dependencies {
+testCompile project(':libs:x-content')
+}
@@ -25,8 +25,9 @@
 import org.elasticsearch.env.Environment;
 import org.elasticsearch.index.IndexSettings;
 import org.elasticsearch.index.analysis.AbstractTokenFilterFactory;
+import org.elasticsearch.index.analysis.MultiTermAwareComponent;
 
-public class TrimTokenFilterFactory extends AbstractTokenFilterFactory {
+public class TrimTokenFilterFactory extends AbstractTokenFilterFactory implements MultiTermAwareComponent {
 
 private static final String UPDATE_OFFSETS_KEY = "update_offsets";
 
@@ -41,4 +42,9 @@ public class TrimTokenFilterFactory extends AbstractTokenFilterFactory {
 public TokenStream create(TokenStream tokenStream) {
 return new TrimFilter(tokenStream);
 }
+
+@Override
+public Object getMultiTermComponent() {
+return this;
+}
 }
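One practical consequence of implementing `MultiTermAwareComponent` is that the factory can be used where only multi-term-aware components are accepted, for example in a custom normalizer. A hypothetical index definition using the `trim` filter in a normalizer (index and normalizer names invented) might look like:

[source,js]
--------------------------------------------------
PUT trim_example
{
  "settings": {
    "analysis": {
      "normalizer": {
        "trimmed": {
          "type": "custom",
          "filter": ["trim", "lowercase"]
        }
      }
    }
  }
}
--------------------------------------------------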

This file was deleted.

@@ -0,0 +1 @@
+38ff5a1f4bcbfb6e1ffacd3263175c2a1ba23e9f
@@ -27,17 +27,17 @@
 public final class Location {
 private final String sourceName;
 private final int offset;
 
 /**
-* Create a new Location
+* Create a new Location
 * @param sourceName script's name
 * @param offset character offset of script element
 */
 public Location(String sourceName, int offset) {
 this.sourceName = Objects.requireNonNull(sourceName);
 this.offset = offset;
 }
 
 /**
 * Return the script's name
 */
@@ -68,43 +68,31 @@ public RuntimeException createError(RuntimeException exception) {
 
 // This maximum length is theoretically 65535 bytes, but as it's CESU-8 encoded we don't know how large it is in bytes, so be safe
 private static final int MAX_NAME_LENGTH = 256;
 
 /** Computes the file name (mostly important for stacktraces) */
-public static String computeSourceName(String scriptName, String source) {
+public static String computeSourceName(String scriptName) {
 StringBuilder fileName = new StringBuilder();
-if (scriptName.equals(PainlessScriptEngine.INLINE_NAME)) {
-// its an anonymous script, include at least a portion of the source to help identify which one it is
-// but don't create stacktraces with filenames that contain newlines or huge names.
+// its an anonymous script, include at least a portion of the source to help identify which one it is
+// but don't create stacktraces with filenames that contain newlines or huge names.
 
-// truncate to the first newline
-int limit = source.indexOf('\n');
-if (limit >= 0) {
-int limit2 = source.indexOf('\r');
-if (limit2 >= 0) {
-limit = Math.min(limit, limit2);
-}
-} else {
-limit = source.length();
-}
+// truncate to the first newline
+int limit = scriptName.indexOf('\n');
+if (limit >= 0) {
+int limit2 = scriptName.indexOf('\r');
+if (limit2 >= 0) {
+limit = Math.min(limit, limit2);
+}
+} else {
+limit = scriptName.length();
+}
 
-// truncate to our limit
-limit = Math.min(limit, MAX_NAME_LENGTH);
-fileName.append(source, 0, limit);
+// truncate to our limit
+limit = Math.min(limit, MAX_NAME_LENGTH);
+fileName.append(scriptName, 0, limit);
 
-// if we truncated, make it obvious
-if (limit != source.length()) {
-fileName.append(" ...");
-}
-fileName.append(" @ <inline script>");
-} else {
-// its a named script, just use the name
-// but don't trust this has a reasonable length!
-if (scriptName.length() > MAX_NAME_LENGTH) {
-fileName.append(scriptName, 0, MAX_NAME_LENGTH);
-fileName.append(" ...");
-} else {
-fileName.append(scriptName);
-}
+// if we truncated, make it obvious
+if (limit != scriptName.length()) {
+fileName.append(" ...");
+}
 }
 return fileName.toString();
 }
@@ -91,14 +91,7 @@ default ScriptException convertToScriptException(Throwable t, Map<String, List<S
 scriptStack.add(element.toString());
 }
 }
-// build a name for the script:
-final String name;
-if (PainlessScriptEngine.INLINE_NAME.equals(getName())) {
-name = getSource();
-} else {
-name = getName();
-}
-ScriptException scriptException = new ScriptException("runtime error", t, scriptStack, name, PainlessScriptEngine.NAME);
+ScriptException scriptException = new ScriptException("runtime error", t, scriptStack, getName(), PainlessScriptEngine.NAME);
 for (Map.Entry<String, List<String>> entry : extraMetadata.entrySet()) {
 scriptException.addMetadata(entry.getKey(), entry.getValue());
 }
@@ -119,11 +119,6 @@ public String getType() {
 return NAME;
 }
 
-/**
-* When a script is anonymous (inline), we give it this name.
-*/
-static final String INLINE_NAME = "<inline>";
-
 @Override
 public <T> T compile(String scriptName, String scriptSource, ScriptContext<T> context, Map<String, String> params) {
 Compiler compiler = contextsToCompilers.get(context);
@@ -425,7 +420,7 @@ public Loader run() {
 return AccessController.doPrivileged(new PrivilegedAction<Object>() {
 @Override
 public Object run() {
-String name = scriptName == null ? INLINE_NAME : scriptName;
+String name = scriptName == null ? source : scriptName;
 Constructor<?> constructor = compiler.compile(loader, new MainMethodReserved(), name, source, compilerSettings);
 
 try {
@@ -488,7 +483,7 @@ void compile(Compiler compiler, Loader loader, MainMethodReserved reserved,
 AccessController.doPrivileged(new PrivilegedAction<Void>() {
 @Override
 public Void run() {
-String name = scriptName == null ? INLINE_NAME : scriptName;
+String name = scriptName == null ? source : scriptName;
 compiler.compile(loader, reserved, name, source, compilerSettings);
 
 return null;
@@ -198,7 +198,7 @@ private Walker(ScriptClassInfo scriptClassInfo, MainMethodReserved reserved, Str
 this.reserved.push(reserved);
 this.debugStream = debugStream;
 this.settings = settings;
-this.sourceName = Location.computeSourceName(sourceName, sourceText);
+this.sourceName = Location.computeSourceName(sourceName);
 this.sourceText = sourceText;
 this.globals = new Globals(new BitSet(sourceText.length()));
 this.definition = definition;
@@ -249,7 +249,7 @@ public void write() {
 }
 visitor.visit(WriterConstants.CLASS_VERSION, classAccess, className, null,
 Type.getType(scriptClassInfo.getBaseClass()).getInternalName(), classInterfaces);
-visitor.visitSource(Location.computeSourceName(name, source), null);
+visitor.visitSource(Location.computeSourceName(name), null);
 
 // Write the a method to bootstrap def calls
 MethodWriter bootstrapDef = new MethodWriter(Opcodes.ACC_STATIC | Opcodes.ACC_VARARGS, DEF_BOOTSTRAP_METHOD, visitor,
@@ -336,9 +336,7 @@ public void testNonDefaultSimilarity() throws Exception {
 hasChildQuery(CHILD_DOC, new TermQueryBuilder("custom_string", "value"), ScoreMode.None);
 HasChildQueryBuilder.LateParsingQuery query = (HasChildQueryBuilder.LateParsingQuery) hasChildQueryBuilder.toQuery(shardContext);
 Similarity expected = SimilarityService.BUILT_IN.get(similarity)
-.create(similarity, Settings.EMPTY,
-Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build(), null)
-.get();
+.apply(Settings.EMPTY, Version.CURRENT, null);
 assertThat(((PerFieldSimilarityWrapper) query.getSimilarity()).get("custom_string"), instanceOf(expected.getClass()));
 }
 
@@ -87,7 +87,7 @@ protected Collection<Class<? extends Plugin>> getPlugins() {
 
 @Override
 protected void initializeAdditionalMappings(MapperService mapperService) throws IOException {
-similarity = randomFrom("classic", "BM25");
+similarity = randomFrom("boolean", "BM25");
 // TODO: use a single type when inner hits have been changed to work with join field,
 // this test randomly generates queries with inner hits
 mapperService.merge(PARENT_TYPE, new CompressedXContent(Strings.toString(PutMappingRequest.buildFromSimplifiedDef(PARENT_TYPE,
@@ -323,9 +323,7 @@ public void testNonDefaultSimilarity() throws Exception {
 hasChildQuery(CHILD_TYPE, new TermQueryBuilder("custom_string", "value"), ScoreMode.None);
 HasChildQueryBuilder.LateParsingQuery query = (HasChildQueryBuilder.LateParsingQuery) hasChildQueryBuilder.toQuery(shardContext);
 Similarity expected = SimilarityService.BUILT_IN.get(similarity)
-.create(similarity, Settings.EMPTY,
-Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build(), null)
-.get();
+.apply(Settings.EMPTY, Version.CURRENT, null);
 assertThat(((PerFieldSimilarityWrapper) query.getSimilarity()).get("custom_string"), instanceOf(expected.getClass()));
 }
 