Upgrade to a Lucene 8 snapshot #33310

Merged — 96 commits, merged Sep 6, 2018. Changes shown from 88 commits.
4e9a6fc
initial commit to upgrade to lucene-8.0.0 snapshot
jimczi Aug 9, 2018
dfa1568
Apply renaming to TermContext (->TermStates) and LevensteinDistance (…
jimczi Aug 16, 2018
4985646
Merge branch 'master' into upgrade/lucene-8
jimczi Aug 16, 2018
cfa8c35
Handle SimWeight removal.
jpountz Aug 17, 2018
55ab015
Handle the removal of basic models be, d and p, and after effect no.
jpountz Aug 17, 2018
3f54c9e
Docs enhancement: added reference to cluster-level setting `search.de…
markharwood Aug 16, 2018
f05caf7
[DOCS] Clarify sentence in network-host.asciidoc (#32429)
datosh Aug 16, 2018
07dedde
Test: Fix unpredictive merges in DocumentSubsetReaderTests
jimczi Aug 16, 2018
06a40a1
[ML] Choose seconds to fix intermittent DatafeeedConfigTest failure
davidkyle Aug 16, 2018
f66e5bb
CharArraysTests: Fix test bug.
jpountz Aug 16, 2018
227b14d
Mutes test in DuelScrollIT
colings86 Aug 16, 2018
73d8a5d
AwaitFix AckIT.
jpountz Aug 16, 2018
f9a6bdf
Remove passphrase support from reload settings API (#32889)
jasontedor Aug 16, 2018
364c9f6
[DOCS] Update WordPress plugins links (#32194)
HazemKhaled Aug 16, 2018
2d056a2
HLRC: adding machine learning delete job (#32820)
benwtrent Aug 16, 2018
65a1888
[Test] Fix DuelScrollIT#testDuelIndexOrderQueryThenFetch
jimczi Aug 16, 2018
e657402
AwaitFix FullClusterRestartIT#testRollupIDSchemeAfterRestart.
jpountz Aug 16, 2018
9b73dda
Painless: Special Case def (#32871)
jdconrad Aug 16, 2018
71eb39b
Fix docs for fixed filename for heap dump path (#32882)
jasontedor Aug 16, 2018
28b5ce5
Temporarily disabled ML BWC tests for backporting
edsavage Aug 16, 2018
b5ae8ff
Re enable ml bwc tests (#32916)
edsavage Aug 16, 2018
42e03c5
Guard against null in email admin watches (#32923)
jasontedor Aug 16, 2018
f7a861c
For filters aggs, make sure that rewrites preserve other_bucket. (#32…
jtibshirani Aug 17, 2018
4fb240a
Security: remove put privilege API (#32879)
jaymode Aug 17, 2018
043a767
RFC: Test that example plugins build stand-alone (#32235)
alpar-t Aug 17, 2018
7e14119
remove StandardFilter
jimczi Aug 17, 2018
2a078ae
Remove imports of EarlyTerminatingSortingCollector.
jpountz Aug 17, 2018
5f38426
boolean needsScores -> ScoreMode scoreMode.
jpountz Aug 17, 2018
c00973b
Remove usage of createNormalizedWeight.
jpountz Aug 17, 2018
eccfc79
Fix (Edge)NGramTokenFilter call sites.
jpountz Aug 17, 2018
2175f3a
ENGLISH_STOP_WORDS_SET
jpountz Aug 17, 2018
10e2ad4
Fix more compile errors.
jpountz Aug 17, 2018
d39c4ba
Fix TopScoreDocCollector factory method calls.
jpountz Aug 17, 2018
e3c5d0d
Fix compile errors in the Lucene helper class.
jpountz Aug 20, 2018
bd6b0c7
Fix more compile errors related to TopDocs-related changes.
jpountz Aug 20, 2018
4d724b0
Fix more compile errors.
jpountz Aug 20, 2018
e2be29c
more Levenstein => Levenshtein
jimczi Aug 20, 2018
946312b
Convert BoostingQueryBuilder to use FunctionScoreQuery.boostByQuery
jimczi Aug 20, 2018
250f160
Fix collapsing topdocs (merge and max score)
jimczi Aug 20, 2018
09ac9f0
remove unused import
jimczi Aug 20, 2018
7fb6393
More compile errors.
jpountz Aug 20, 2018
abfe196
More compile errors.
jpountz Aug 21, 2018
a841413
Add TODO.
jpountz Aug 21, 2018
7f46a18
Implement tracking of scores as a sub-fetch phase rather than a step …
jpountz Aug 21, 2018
a509758
Fix compile errors in TopHitsAggregator.
jpountz Aug 21, 2018
ecda651
More compile errors.
jpountz Aug 21, 2018
32563c2
Merge branch 'master' into upgrade/lucene-8
jimczi Aug 24, 2018
03d13b9
fix compilation and style
jimczi Aug 24, 2018
2684d88
updateShas
jimczi Aug 24, 2018
aca51ec
more checkstyle fixes
jimczi Aug 24, 2018
b267965
standard filter removal
jimczi Aug 24, 2018
4dc5a05
more wrong assert on topDocs.totalHits
jimczi Aug 24, 2018
0806f52
dismax tiebreaker must be <= 1
jimczi Aug 24, 2018
df0cf40
consistently omit freqs in test
jimczi Aug 24, 2018
34455fa
remove deprecation tests
jimczi Aug 24, 2018
dcebf3a
replace empty TermStats with null
jimczi Aug 24, 2018
d047238
remove more standard token filter refs
jimczi Aug 24, 2018
de503ed
Change the way that nested docs are excluded.
jpountz Aug 24, 2018
8a6505d
Make ScriptedSimilarityTests pass.
jpountz Aug 24, 2018
4951f58
Update Google Cloud Storage Library for Java (#32940)
tlrx Aug 24, 2018
cbd9236
Muted testEmptyAuthorizedIndicesSearchForAllDisallowNoIndices
astefan Aug 24, 2018
febf169
[Rollup] Move getMetadata() methods out of rollup config objects (#32…
tlrx Aug 24, 2018
91ceb08
Muted testListenersThrowingExceptionsDoNotCauseOtherListenersToBeSkipped
astefan Aug 24, 2018
77282e8
Add hook to skip asserting x-content equivalence (#33114)
jasontedor Aug 24, 2018
a8941a9
Fix race condition in scheduler engine test
jasontedor Aug 24, 2018
675760e
[Rollup] Move toAggCap() methods out of rollup config objects (#32583)
tlrx Aug 24, 2018
9824bf9
Revert "Do NOT allow termvectors on nested fields (#32728)"
mayya-sharipova Aug 24, 2018
0d0927e
[Test] Fix sporadic failure in MembershipActionTests
jimczi Aug 24, 2018
848ea39
fix initial value for MaxScoreCollector
jimczi Aug 24, 2018
bc955cd
unused import
jimczi Aug 24, 2018
89dc9e3
adapt to new explanation
jimczi Aug 24, 2018
0b76d68
Merge branch 'master' into upgrade/lucene-8
jimczi Aug 24, 2018
bf39efa
fix default score for search hits
jimczi Aug 27, 2018
7655727
fix expectation in test
jimczi Aug 28, 2018
2786379
Merge branch 'master' into upgrade/lucene-8
jimczi Aug 29, 2018
b85116a
fix negative script score in test
jimczi Aug 29, 2018
b6507fe
adapt chinese analyzer bwc
jimczi Aug 29, 2018
f4a44bf
adapt docs
jimczi Aug 29, 2018
46e8c25
fix explanation expectation in tests
jimczi Aug 29, 2018
6269744
ensure positive scores in function score tests
jimczi Aug 30, 2018
d8a370c
Merge branch 'master' into upgrade/lucene-8
jimczi Aug 30, 2018
c2313b1
Merge branch 'master' into upgrade/lucene-8
jimczi Aug 30, 2018
f827b7a
Merge branch 'master' into upgrade/lucene-8
jimczi Aug 31, 2018
8d9db07
check style
jimczi Aug 31, 2018
595c529
awaitsfix some tests
jimczi Aug 31, 2018
e935c0e
upgrade to lucene-8.0.0-snapshot-4d78db26be
jimczi Aug 31, 2018
bef0c14
fix compil after upgrade to a new snapshot
jimczi Aug 31, 2018
36ae5e2
Merge branch 'master' into upgrade/lucene-8
jimczi Sep 3, 2018
67efaae
handle bwc for the standard filter
jimczi Sep 5, 2018
e32f54c
fix test
jimczi Sep 5, 2018
6bae99d
add standard filter removal to redirects
jimczi Sep 5, 2018
dc04a50
styl
jimczi Sep 5, 2018
647b846
index statistics are always positive numbers
jimczi Sep 5, 2018
8002bce
Merge branch 'master' into upgrade/lucene-8
jimczi Sep 5, 2018
0cc44a9
unused import
jimczi Sep 5, 2018
9a6d074
fix test for standard filter bwc
jimczi Sep 5, 2018
@@ -539,9 +539,9 @@ class BuildPlugin implements Plugin<Project> {
from generatePOMTask.destination
into "${project.buildDir}/distributions"
rename {
generatePOMTask.ext.pomFileName == null ?
"${project.archivesBaseName}-${project.version}.pom" :
generatePOMTask.ext.pomFileName
generatePOMTask.ext.pomFileName == null ?
"${project.archivesBaseName}-${project.version}.pom" :
generatePOMTask.ext.pomFileName
}
}
}
2 changes: 1 addition & 1 deletion buildSrc/version.properties
@@ -1,5 +1,5 @@
elasticsearch = 7.0.0-alpha1
lucene = 7.5.0-snapshot-13b9e28f9d
lucene = 8.0.0-snapshot-4d78db26be

# optional dependencies
spatial4j = 0.7
@@ -1174,10 +1174,10 @@ static Request xPackInfo(XPackInfoRequest infoRequest) {
static Request xPackGraphExplore(GraphExploreRequest exploreRequest) throws IOException {
String endpoint = endpoint(exploreRequest.indices(), exploreRequest.types(), "_xpack/graph/_explore");
Request request = new Request(HttpGet.METHOD_NAME, endpoint);
request.setEntity(createEntity(exploreRequest, REQUEST_BODY_CONTENT_TYPE));
request.setEntity(createEntity(exploreRequest, REQUEST_BODY_CONTENT_TYPE));
return request;
}
}

static Request xPackWatcherPutWatch(PutWatchRequest putWatchRequest) {
String endpoint = new EndpointBuilder()
.addPathPartAsIs("_xpack")
@@ -2720,7 +2720,7 @@ public void testXPackPutWatch() throws Exception {
request.getEntity().writeTo(bos);
assertThat(bos.toString("UTF-8"), is(body));
}

public void testGraphExplore() throws Exception {
Map<String, String> expectedParams = new HashMap<>();

@@ -2748,7 +2748,7 @@ public void testGraphExplore() throws Exception {
assertEquals(expectedParams, request.getParameters());
assertThat(request.getEntity().getContentType().getValue(), is(XContentType.JSON.mediaTypeWithoutParameters()));
assertToXContentBody(graphExploreRequest, request.getEntity());
}
}

public void testXPackDeleteWatch() {
DeleteWatchRequest deleteWatchRequest = new DeleteWatchRequest();
@@ -1034,7 +1034,7 @@ public void testExplain() throws IOException {
assertTrue(explainResponse.isExists());
assertTrue(explainResponse.isMatch());
assertTrue(explainResponse.hasExplanation());
assertThat(explainResponse.getExplanation().getValue(), greaterThan(0.0f));
assertThat(explainResponse.getExplanation().getValue().floatValue(), greaterThan(0.0f));
assertNull(explainResponse.getGetResult());
}
{
@@ -21,7 +21,7 @@

import joptsimple.OptionSet;
import joptsimple.OptionSpec;
import org.apache.lucene.search.spell.LevensteinDistance;
import org.apache.lucene.search.spell.LevenshteinDistance;
import org.apache.lucene.util.CollectionUtil;
import org.bouncycastle.bcpg.ArmoredInputStream;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
@@ -355,7 +355,7 @@ boolean urlExists(Terminal terminal, String urlString) throws IOException {

/** Returns all the official plugin names that look similar to pluginId. **/
private List<String> checkMisspelledPlugin(String pluginId) {
LevensteinDistance ld = new LevensteinDistance();
LevenshteinDistance ld = new LevenshteinDistance();
List<Tuple<Float, String>> scoredKeys = new ArrayList<>();
for (String officialPlugin : OFFICIAL_PLUGINS) {
float distance = ld.getDistance(pluginId, officialPlugin);
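The diff above only renames the class — Lucene 8 fixed the long-standing `Levenstein` misspelling — and the call sites are otherwise untouched. As an illustration of the algorithm behind the renamed class (this is not Lucene's implementation, and note that Lucene's `LevenshteinDistance.getDistance()` returns a normalized similarity in `[0,1]` rather than the raw edit count computed here), the classic dynamic-programming edit distance looks like this:

```java
// Classic Levenshtein edit distance via dynamic programming, using two
// rolling rows instead of the full matrix. Illustrative sketch only.
public class Levenshtein {
    static int distance(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] curr = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) {
            prev[j] = j; // cost of building b[0..j) from the empty string
        }
        for (int i = 1; i <= a.length(); i++) {
            curr[0] = i; // cost of deleting a[0..i)
            for (int j = 1; j <= b.length(); j++) {
                int substitute = prev[j - 1] + (a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1);
                curr[j] = Math.min(substitute, Math.min(prev[j] + 1, curr[j - 1] + 1));
            }
            int[] tmp = prev; prev = curr; curr = tmp; // roll the rows
        }
        return prev[b.length()];
    }

    public static void main(String[] args) {
        System.out.println(distance("kitten", "sitting")); // 3
        System.out.println(distance("lucene", "lucene"));  // 0
    }
}
```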
4 changes: 2 additions & 2 deletions docs/Versions.asciidoc
@@ -1,7 +1,7 @@
:version: 7.0.0-alpha1
:major-version: 7.x
:lucene_version: 7.5.0
:lucene_version_path: 7_5_0
:lucene_version: 8.0.0
:lucene_version_path: 8_0_0
:branch: master
:jdk: 1.8.0_131
:jdk_major: 8
1 change: 0 additions & 1 deletion docs/plugins/analysis-phonetic.asciidoc
@@ -38,7 +38,6 @@ PUT phonetic_sample
"my_analyzer": {
"tokenizer": "standard",
"filter": [
"standard",
"lowercase",
"my_metaphone"
]
@@ -320,7 +320,7 @@ Top hits response snippet with a nested hit, which resides in the first slot of
"by_nested": {
"hits": {
"total": 1,
"max_score": 0.2876821,
"max_score": 0.3616575,
"hits": [
{
"_index": "sales",
Expand All @@ -330,7 +330,7 @@ Top hits response snippet with a nested hit, which resides in the first slot of
"field": "comments", <1>
"offset": 0 <2>
},
"_score": 0.2876821,
"_score": 0.3616575,
"_source": {
"comment": "This car could have better brakes", <3>
"username": "baddriver007"
2 changes: 0 additions & 2 deletions docs/reference/analysis/analyzers/standard-analyzer.asciidoc
@@ -273,7 +273,6 @@ Tokenizer::
* <<analysis-standard-tokenizer,Standard Tokenizer>>

Token Filters::
* <<analysis-standard-tokenfilter,Standard Token Filter>>
* <<analysis-lowercase-tokenfilter,Lower Case Token Filter>>
* <<analysis-stop-tokenfilter,Stop Token Filter>> (disabled by default)

Expand All @@ -292,7 +291,6 @@ PUT /standard_example
"rebuilt_standard": {
"tokenizer": "standard",
"filter": [
"standard",
"lowercase" <1>
]
}
2 changes: 0 additions & 2 deletions docs/reference/analysis/tokenfilters.asciidoc
@@ -9,8 +9,6 @@ or add tokens (eg synonyms).
Elasticsearch has a number of built in token filters which can be
used to build <<analysis-custom-analyzer,custom analyzers>>.

include::tokenfilters/standard-tokenfilter.asciidoc[]

include::tokenfilters/asciifolding-tokenfilter.asciidoc[]

include::tokenfilters/flatten-graph-tokenfilter.asciidoc[]
@@ -15,7 +15,7 @@ PUT /asciifold_example
"analyzer" : {
"default" : {
"tokenizer" : "standard",
"filter" : ["standard", "asciifolding"]
"filter" : ["asciifolding"]
}
}
}
Expand All @@ -37,7 +37,7 @@ PUT /asciifold_example
"analyzer" : {
"default" : {
"tokenizer" : "standard",
"filter" : ["standard", "my_ascii_folding"]
"filter" : ["my_ascii_folding"]
}
},
"filter" : {
@@ -16,7 +16,7 @@ PUT /elision_example
"analyzer" : {
"default" : {
"tokenizer" : "standard",
"filter" : ["standard", "elision"]
"filter" : ["elision"]
}
},
"filter" : {
@@ -26,7 +26,7 @@ PUT /keep_types_example
"analyzer" : {
"my_analyzer" : {
"tokenizer" : "standard",
"filter" : ["standard", "lowercase", "extract_numbers"]
"filter" : ["lowercase", "extract_numbers"]
}
},
"filter" : {
@@ -87,7 +87,7 @@ PUT /keep_types_exclude_example
"analyzer" : {
"my_analyzer" : {
"tokenizer" : "standard",
"filter" : ["standard", "lowercase", "remove_numbers"]
"filter" : ["lowercase", "remove_numbers"]
}
},
"filter" : {
@@ -27,11 +27,11 @@ PUT /keep_words_example
"analyzer" : {
"example_1" : {
"tokenizer" : "standard",
"filter" : ["standard", "lowercase", "words_till_three"]
"filter" : ["lowercase", "words_till_three"]
},
"example_2" : {
"tokenizer" : "standard",
"filter" : ["standard", "lowercase", "words_in_file"]
"filter" : ["lowercase", "words_in_file"]
}
},
"filter" : {
@@ -19,7 +19,7 @@ PUT /my_index
"analyzer" : {
"my_analyzer" : {
"tokenizer" : "standard",
"filter" : ["standard", "lowercase", "my_snow"]
"filter" : ["lowercase", "my_snow"]
}
},
"filter" : {
15 changes: 0 additions & 15 deletions docs/reference/analysis/tokenfilters/standard-tokenfilter.asciidoc

This file was deleted.

@@ -13,7 +13,7 @@ PUT /my_index
"analyzer" : {
"my_analyzer" : {
"tokenizer" : "standard",
"filter" : ["standard", "lowercase", "my_stemmer"]
"filter" : ["lowercase", "my_stemmer"]
}
},
"filter" : {
8 changes: 4 additions & 4 deletions docs/reference/how-to/recipes/stemming.asciidoc
@@ -143,13 +143,13 @@ GET index/_search
},
"hits": {
"total": 1,
"max_score": 0.80259144,
"max_score": 0.8025915,
"hits": [
{
"_index": "index",
"_type": "_doc",
"_id": "1",
"_score": 0.80259144,
"_score": 0.8025915,
"_source": {
"body": "Ski resort"
}
@@ -200,13 +200,13 @@ GET index/_search
},
"hits": {
"total": 1,
"max_score": 0.80259144,
"max_score": 0.8025915,
"hits": [
{
"_index": "index",
"_type": "_doc",
"_id": "1",
"_score": 0.80259144,
"_score": 0.8025915,
"_source": {
"body": "Ski resort"
}
24 changes: 12 additions & 12 deletions docs/reference/index-modules/similarity.asciidoc
@@ -295,27 +295,27 @@ Which yields:
"details": []
},
{
"value": 2.0,
"value": 2,
"description": "field.docCount",
"details": []
},
{
"value": 4.0,
"value": 4,
"description": "field.sumDocFreq",
"details": []
},
{
"value": 5.0,
"value": 5,
"description": "field.sumTotalTermFreq",
"details": []
},
{
"value": 1.0,
"value": 1,
"description": "term.docFreq",
"details": []
},
{
"value": 2.0,
"value": 2,
"description": "term.totalTermFreq",
"details": []
},
Expand All @@ -325,7 +325,7 @@ Which yields:
"details": []
},
{
"value": 3.0,
"value": 3,
"description": "doc.length",
"details": []
}
@@ -469,27 +469,27 @@ GET /index/_search?explain=true
"details": []
},
{
"value": 2.0,
"value": 2,
"description": "field.docCount",
"details": []
},
{
"value": 4.0,
"value": 4,
"description": "field.sumDocFreq",
"details": []
},
{
"value": 5.0,
"value": 5,
"description": "field.sumTotalTermFreq",
"details": []
},
{
"value": 1.0,
"value": 1,
"description": "term.docFreq",
"details": []
},
{
"value": 2.0,
"value": 2,
"description": "term.totalTermFreq",
"details": []
},
Expand All @@ -499,7 +499,7 @@ GET /index/_search?explain=true
"details": []
},
{
"value": 3.0,
"value": 3,
"description": "doc.length",
"details": []
}
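The similarity diff above changes explanation values like `2.0` to `2` because Lucene 8 reports each statistic with its natural numeric type: doc counts and term frequencies are longs, while scores remain floats. This is also why the client test earlier in this diff switches to `getValue().floatValue()`. A self-contained sketch of the pattern (the helper below is illustrative, not a Lucene or Elasticsearch API):

```java
// In Lucene 8 an explanation value is a Number, so a long statistic prints
// as "2" (not "2.0") and callers that need a float score must narrow it
// explicitly, e.g. explanation.getValue().floatValue().
public class ExplainValue {
    // Illustrative helper mirroring the getValue().floatValue() call sites.
    static float asFloat(Number value) {
        return value.floatValue();
    }

    public static void main(String[] args) {
        Number docCount = 2L;       // an index statistic: now a long
        Number score = 0.8025915f;  // a score: still a float
        System.out.println(docCount);            // prints 2, not 2.0
        System.out.println(asFloat(score) > 0f); // true: scores stay positive
    }
}
```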
3 changes: 0 additions & 3 deletions docs/reference/mapping/types/percolator.asciidoc
@@ -446,7 +446,6 @@ PUT my_queries1
"type": "custom",
"tokenizer": "standard",
"filter": [
"standard",
"lowercase",
"wildcard_edge_ngram"
]
@@ -597,7 +596,6 @@ PUT my_queries2
"type": "custom",
"tokenizer": "standard",
"filter": [
"standard",
"lowercase",
"reverse",
"wildcard_edge_ngram"
Expand All @@ -607,7 +605,6 @@ PUT my_queries2
"type": "custom",
"tokenizer": "standard",
"filter": [
"standard",
"lowercase",
"reverse"
]
4 changes: 4 additions & 0 deletions docs/reference/migration/migrate_7_0/analysis.asciidoc
@@ -22,3 +22,7 @@ The `delimited_payload_filter` was deprecated and renamed to `delimited_payload`
Using it in indices created before 7.0 will issue deprecation warnings. Using the old
name in new indices created in 7.0 will throw an error. Use the new name `delimited_payload`
instead.

==== `standard` filter has been removed

The `standard` token filter has been removed because it doesn't change anything in the stream.
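Migrating a custom analyzer is simply a matter of dropping `"standard"` from the filter chain, as the doc diffs in this PR do; the tokenizer output is unchanged. A minimal sketch of the resulting settings (the analyzer name and the remaining filters are illustrative):

```json
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "standard",
          "filter": ["lowercase", "asciifolding"]
        }
      }
    }
  }
}
```

Before the removal, the same chain would have read `"filter": ["standard", "lowercase", "asciifolding"]`.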