Add limits for ngram and shingle settings #27211

Merged
@@ -113,6 +113,8 @@ public final class IndexScopedSettings extends AbstractScopedSettings {
IndexSettings.MAX_INNER_RESULT_WINDOW_SETTING,
IndexSettings.MAX_DOCVALUE_FIELDS_SEARCH_SETTING,
IndexSettings.MAX_SCRIPT_FIELDS_SETTING,
IndexSettings.MAX_NGRAM_DIFF_SETTING,
IndexSettings.MAX_SHINGLE_DIFF_SETTING,
IndexSettings.MAX_RESCORE_WINDOW_SETTING,
IndexSettings.MAX_ADJACENCY_MATRIX_FILTERS_SETTING,
IndexSettings.INDEX_TRANSLOG_SYNC_INTERVAL_SETTING,
@@ -150,6 +152,7 @@ public final class IndexScopedSettings extends AbstractScopedSettings {
EngineConfig.INDEX_CODEC_SETTING,
EngineConfig.INDEX_OPTIMIZE_AUTO_GENERATED_IDS,
IndexMetaData.SETTING_WAIT_FOR_ACTIVE_SHARDS,

// validate that built-in similarities don't get redefined
Setting.groupSetting("index.similarity.", (s) -> {
Map<String, Settings> groups = s.getAsGroups();
40 changes: 40 additions & 0 deletions core/src/main/java/org/elasticsearch/index/IndexSettings.java
@@ -107,6 +107,26 @@ public final class IndexSettings {
public static final Setting<Integer> MAX_SCRIPT_FIELDS_SETTING =
Setting.intSetting("index.max_script_fields", 32, 0, Property.Dynamic, Property.IndexScope);

/**
* Index setting describing, for NGramTokenizer and NGramTokenFilter,
* the maximum allowed difference between
* max_gram (maximum length of characters in a gram) and
* min_gram (minimum length of characters in a gram).
* The default value is 1, as this is the default difference in NGramTokenizer,
* and is defensive as it prevents generating too many index terms.
*/
public static final Setting<Integer> MAX_NGRAM_DIFF_SETTING =
Setting.intSetting("index.max_ngram_diff", 1, 0, Property.Dynamic, Property.IndexScope);

/**
* Index setting describing, for ShingleTokenFilter,
* the maximum allowed difference between
* max_shingle_size and min_shingle_size.
* The default value of 3 is defensive as it prevents generating too many tokens.
*/
Contributor jimczi commented:
Yes, too many tokens is problematic. I also don't like that we build invalid boolean queries when the diff is greater than 0. For instance, with output_unigrams set to true and shingles of size 2, the input foo bar foobar is analyzed as:
position 1: (foo_bar, foo), position 2: (bar, bar_foobar) ... which is problematic for any position query.
3 seems reasonable, especially for fields that don't index positions.

The PR author (Contributor) replied:

@jimczi thanks for your comment. I don't think I understand how a diff in shingle size greater than 0 makes boolean queries invalid. Does it mean that if we have tokens: {"token" : "foo_bar", "position" : 1}, {"token" : "foo", "position" : 1}, {"token" : "bar", "position" : 2}, {"token" : "bar_foobar", "position" : 2}, we can't have a phrase query, say "foo bar"?

jimczi replied:

foo bar would return the correct document but it would build an invalid phrase query:

"(foo_bar foo) bar"

... trying to find documents with foo_bar bar as a phrase query, which could be simplified to foo_bar. For a boolean query it would not consider that foo_bar is enough to match foo AND bar, so the bigram would be useless for matching this type of query. We have solutions on the query side to build the correct query (foo_bar OR (foo AND bar)) but this is currently deactivated because it can explode the number of clauses in the query if you have a large diff between min and max.
For simplicity I prefer when a single shingle size is used for a shingle field. In this case a phrase query like "foo bar foobar baz" could be optimized to "foo_bar foobar_baz" with shingles of size 2 (bar_foobar can be safely ignored since we know that the previous match can only be foo_bar).
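
For illustration, the token layout described above can be reproduced with the _analyze API; this is only a sketch, using a whitespace tokenizer and an inline shingle filter with min and max size 2 and unigrams enabled to mirror the example, not settings taken from this PR:

GET _analyze
{
  "tokenizer": "whitespace",
  "filter": [
    {
      "type": "shingle",
      "min_shingle_size": 2,
      "max_shingle_size": 2,
      "output_unigrams": true
    }
  ],
  "text": "foo bar foobar"
}

The response shows foo and foo_bar sharing the first position, bar and bar_foobar sharing the second, and foobar alone on the third, which is the layout that turns a naive phrase query into "(foo_bar foo) bar".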

public static final Setting<Integer> MAX_SHINGLE_DIFF_SETTING =
Setting.intSetting("index.max_shingle_diff", 3, 0, Property.Dynamic, Property.IndexScope);

/**
* Index setting describing the maximum value of allowed `docvalue_fields` that can be retrieved
* per search request. The default maximum of 100 is defensive for the reason that retrieving
@@ -239,6 +259,8 @@ public final class IndexSettings {
private volatile int maxRescoreWindow;
private volatile int maxDocvalueFields;
private volatile int maxScriptFields;
private volatile int maxNgramDiff;
private volatile int maxShingleDiff;
private volatile boolean TTLPurgeDisabled;
/**
* The maximum number of refresh listeners allows on this shard.
@@ -342,6 +364,8 @@ public IndexSettings(final IndexMetaData indexMetaData, final Settings nodeSetti
maxRescoreWindow = scopedSettings.get(MAX_RESCORE_WINDOW_SETTING);
maxDocvalueFields = scopedSettings.get(MAX_DOCVALUE_FIELDS_SEARCH_SETTING);
maxScriptFields = scopedSettings.get(MAX_SCRIPT_FIELDS_SETTING);
maxNgramDiff = scopedSettings.get(MAX_NGRAM_DIFF_SETTING);
maxShingleDiff = scopedSettings.get(MAX_SHINGLE_DIFF_SETTING);
TTLPurgeDisabled = scopedSettings.get(INDEX_TTL_DISABLE_PURGE_SETTING);
maxRefreshListeners = scopedSettings.get(MAX_REFRESH_LISTENERS_PER_SHARD);
maxSlicesPerScroll = scopedSettings.get(MAX_SLICES_PER_SCROLL);
@@ -373,6 +397,8 @@ public IndexSettings(final IndexMetaData indexMetaData, final Settings nodeSetti
scopedSettings.addSettingsUpdateConsumer(MAX_RESCORE_WINDOW_SETTING, this::setMaxRescoreWindow);
scopedSettings.addSettingsUpdateConsumer(MAX_DOCVALUE_FIELDS_SEARCH_SETTING, this::setMaxDocvalueFields);
scopedSettings.addSettingsUpdateConsumer(MAX_SCRIPT_FIELDS_SETTING, this::setMaxScriptFields);
scopedSettings.addSettingsUpdateConsumer(MAX_NGRAM_DIFF_SETTING, this::setMaxNgramDiff);
scopedSettings.addSettingsUpdateConsumer(MAX_SHINGLE_DIFF_SETTING, this::setMaxShingleDiff);
scopedSettings.addSettingsUpdateConsumer(INDEX_WARMER_ENABLED_SETTING, this::setEnableWarmer);
scopedSettings.addSettingsUpdateConsumer(INDEX_GC_DELETES_SETTING, this::setGCDeletes);
scopedSettings.addSettingsUpdateConsumer(INDEX_TRANSLOG_FLUSH_THRESHOLD_SIZE_SETTING, this::setTranslogFlushThresholdSize);
@@ -641,6 +667,20 @@ private void setMaxDocvalueFields(int maxDocvalueFields) {
this.maxDocvalueFields = maxDocvalueFields;
}

/**
* Returns the maximum allowed difference between max and min length of ngram
*/
public int getMaxNgramDiff() { return this.maxNgramDiff; }

private void setMaxNgramDiff(int maxNgramDiff) { this.maxNgramDiff = maxNgramDiff; }

/**
* Returns the maximum allowed difference between max and min shingle_size
*/
public int getMaxShingleDiff() { return this.maxShingleDiff; }

private void setMaxShingleDiff(int maxShingleDiff) { this.maxShingleDiff = maxShingleDiff; }

/**
* Returns the maximum number of allowed script_fields to retrieve in a search request
*/
@@ -21,6 +21,7 @@

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.ngram.NGramTokenizer;
import org.elasticsearch.Version;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.index.IndexSettings;
@@ -84,8 +85,21 @@ static CharMatcher parseTokenChars(List<String> characterClasses) {

public NGramTokenizerFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) {
super(indexSettings, name, settings);
int maxAllowedNgramDiff = indexSettings.getMaxNgramDiff();
this.minGram = settings.getAsInt("min_gram", NGramTokenizer.DEFAULT_MIN_NGRAM_SIZE);
this.maxGram = settings.getAsInt("max_gram", NGramTokenizer.DEFAULT_MAX_NGRAM_SIZE);
int ngramDiff = maxGram - minGram;
if (ngramDiff > maxAllowedNgramDiff) {
if (indexSettings.getIndexVersionCreated().onOrAfter(Version.V_7_0_0_alpha1)) {
throw new IllegalArgumentException(
"The difference between max_gram and min_gram in NGram Tokenizer must be less than or equal to: ["
+ maxAllowedNgramDiff + "] but was [" + ngramDiff + "]. This limit can be set by changing the ["
+ IndexSettings.MAX_NGRAM_DIFF_SETTING.getKey() + "] index level setting.");
} else {
deprecationLogger.deprecated("Deprecated big difference between max_gram and min_gram in NGram Tokenizer,"
+ "expected difference must be less than or equal to: [" + maxAllowedNgramDiff + "]");
}
}
this.matcher = parseTokenChars(settings.getAsList("token_chars"));
}

@@ -22,6 +22,7 @@
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute;
import org.apache.lucene.analysis.shingle.ShingleFilter;
import org.elasticsearch.Version;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.index.IndexSettings;
@@ -32,9 +33,24 @@ public class ShingleTokenFilterFactory extends AbstractTokenFilterFactory {

public ShingleTokenFilterFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) {
super(indexSettings, name, settings);
int maxAllowedShingleDiff = indexSettings.getMaxShingleDiff();
Integer maxShingleSize = settings.getAsInt("max_shingle_size", ShingleFilter.DEFAULT_MAX_SHINGLE_SIZE);
Integer minShingleSize = settings.getAsInt("min_shingle_size", ShingleFilter.DEFAULT_MIN_SHINGLE_SIZE);
Boolean outputUnigrams = settings.getAsBoolean("output_unigrams", true);

int shingleDiff = maxShingleSize - minShingleSize + (outputUnigrams ? 1 : 0);
if (shingleDiff > maxAllowedShingleDiff) {
if (indexSettings.getIndexVersionCreated().onOrAfter(Version.V_7_0_0_alpha1)) {
throw new IllegalArgumentException(
"In Shingle TokenFilter the difference between max_shingle_size and min_shingle_size (and +1 if outputting unigrams)"
+ " must be less than or equal to: [" + maxAllowedShingleDiff + "] but was [" + shingleDiff + "]. This limit"
+ " can be set by changing the [" + IndexSettings.MAX_SHINGLE_DIFF_SETTING.getKey() + "] index level setting.");
} else {
deprecationLogger.deprecated("Deprecated big difference between maxShingleSize and minShingleSize in Shingle TokenFilter,"
+ "expected difference must be less than or equal to: [" + maxAllowedShingleDiff + "]");
}
}

Boolean outputUnigramsIfNoShingles = settings.getAsBoolean("output_unigrams_if_no_shingles", false);
String tokenSeparator = settings.get("token_separator", ShingleFilter.DEFAULT_TOKEN_SEPARATOR);
String fillerToken = settings.get("filler_token", ShingleFilter.DEFAULT_FILLER_TOKEN);
@@ -27,6 +27,7 @@
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute;
import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.ESTokenStreamTestCase;

@@ -102,4 +103,25 @@ public void testDisableGraph() throws IOException {
assertFalse(stream.hasAttribute(DisableGraphAttribute.class));
}
}

/*
* test that throws an error when trying to get a ShingleTokenFilter where difference between max_shingle_size and min_shingle_size
* is greater than the allowed value of max_shingle_diff
*/
public void testMaxShingleDiffException() throws Exception {
String RESOURCE2 = "/org/elasticsearch/index/analysis/shingle_analysis2.json";
int maxAllowedShingleDiff = 3;
int shingleDiff = 8;
try {
ESTestCase.TestAnalysis analysis = AnalysisTestsHelper.createTestAnalysisFromClassPath(createTempDir(), RESOURCE2);
analysis.tokenFilter.get("shingle");
fail();
} catch (IllegalArgumentException ex) {
assertEquals(
"In Shingle TokenFilter the difference between max_shingle_size and min_shingle_size (and +1 if outputting unigrams)"
+ " must be less than or equal to: [" + maxAllowedShingleDiff + "] but was [" + shingleDiff + "]. This limit"
+ " can be set by changing the [" + IndexSettings.MAX_SHINGLE_DIFF_SETTING.getKey() + "] index level setting.",
ex.getMessage());
}
}
}
@@ -30,6 +30,7 @@
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.MatchQueryBuilder;
import org.elasticsearch.index.query.MultiMatchQueryBuilder;
@@ -1802,6 +1803,7 @@ public void testSearchEmptyDoc() {
public void testNGramCopyField() {
CreateIndexRequestBuilder builder = prepareCreate("test").setSettings(Settings.builder()
.put(indexSettings())
.put(IndexSettings.MAX_NGRAM_DIFF_SETTING.getKey(), 9)
.put("index.analysis.analyzer.my_ngram_analyzer.type", "custom")
.put("index.analysis.analyzer.my_ngram_analyzer.tokenizer", "my_ngram_tokenizer")
.put("index.analysis.tokenizer.my_ngram_tokenizer.type", "nGram")
@@ -28,6 +28,7 @@
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.plugins.ScriptPlugin;
import org.elasticsearch.script.ScriptContext;
@@ -683,6 +684,7 @@ public void testDifferentShardSize() throws Exception {
public void testShardFailures() throws IOException, InterruptedException {
CreateIndexRequestBuilder builder = prepareCreate("test").setSettings(Settings.builder()
.put(indexSettings())
.put(IndexSettings.MAX_SHINGLE_DIFF_SETTING.getKey(), 4)
.put("index.analysis.analyzer.suggest.tokenizer", "standard")
.putList("index.analysis.analyzer.suggest.filter", "standard", "lowercase", "shingler")
.put("index.analysis.filter.shingler.type", "shingle")
@@ -743,6 +745,7 @@ public void testEmptyShards() throws IOException, InterruptedException {
endObject();
assertAcked(prepareCreate("test").setSettings(Settings.builder()
.put(indexSettings())
.put(IndexSettings.MAX_SHINGLE_DIFF_SETTING.getKey(), 4)
.put("index.analysis.analyzer.suggest.tokenizer", "standard")
.putList("index.analysis.analyzer.suggest.filter", "standard", "lowercase", "shingler")
.put("index.analysis.filter.shingler.type", "shingle")
@@ -0,0 +1,15 @@
{
"index":{
"analysis":{
"filter":{
"shingle_filler":{
"type":"shingle",
"max_shingle_size" : 10,
"min_shingle_size" : 2,
"output_unigrams" : false,
"filler_token" : "FILLER"
}
}
}
}
}
@@ -13,3 +13,6 @@ type:
|`max_gram` |Defaults to `2`.
|============================

The index level setting `index.max_ngram_diff` controls the maximum allowed
difference between `max_gram` and `min_gram`.
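
As a sketch of what this means in practice (the index, filter, and analyzer names below are invented for illustration), an ngram filter with `min_gram` 3 and `max_gram` 5 has a difference of 2 and therefore needs the limit raised above its default of 1:

PUT my_index
{
  "settings": {
    "index.max_ngram_diff": 2,
    "analysis": {
      "filter": {
        "my_ngrams": { "type": "ngram", "min_gram": 3, "max_gram": 5 }
      },
      "analyzer": {
        "my_ngram_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "my_ngrams"]
        }
      }
    }
  }
}

Per the version check in this PR, indices created on or after 7.0.0-alpha1 reject such a filter without the `index.max_ngram_diff` override; older indices only emit a deprecation warning.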

@@ -38,3 +38,5 @@ used if the position increment is greater than one when a `stop` filter is used
together with the `shingle` filter. Defaults to `"_"`
|=======================================================================

The index level setting `index.max_shingle_diff` controls the maximum allowed
difference between `max_shingle_size` and `min_shingle_size`.
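
For example (again only a sketch with invented names), a shingle filter spanning sizes 2 through 5 with unigrams enabled has a difference of 4 (5 - 2 + 1, following the check in ShingleTokenFilterFactory above), so the limit must be raised from its default of 3:

PUT my_index
{
  "settings": {
    "index.max_shingle_diff": 4,
    "analysis": {
      "filter": {
        "my_shingles": {
          "type": "shingle",
          "min_shingle_size": 2,
          "max_shingle_size": 5,
          "output_unigrams": true
        }
      },
      "analyzer": {
        "my_shingle_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "my_shingles"]
        }
      }
    }
  }
}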
3 changes: 3 additions & 0 deletions docs/reference/analysis/tokenizers/ngram-tokenizer.asciidoc
@@ -198,6 +198,9 @@ value. The smaller the length, the more documents will match but the lower
the quality of the matches. The longer the length, the more specific the
matches. A tri-gram (length `3`) is a good place to start.

The index level setting `index.max_ngram_diff` controls the maximum allowed
difference between `max_gram` and `min_gram`.

[float]
=== Example configuration

10 changes: 10 additions & 0 deletions docs/reference/index-modules.asciidoc
@@ -144,6 +144,16 @@ specific index module:
The maximum number of `script_fields` that are allowed in a query.
Defaults to `32`.

`index.max_ngram_diff`::

The maximum allowed difference between `min_gram` and `max_gram` for NGramTokenizer and NGramTokenFilter.
Defaults to `1`.

`index.max_shingle_diff`::

The maximum allowed difference between `max_shingle_size` and `min_shingle_size` for ShingleTokenFilter.
Defaults to `3`.

`index.blocks.read_only`::

Set to `true` to make the index and index metadata read only, `false` to
@@ -25,6 +25,8 @@
import org.elasticsearch.env.Environment;
import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.index.analysis.AbstractTokenFilterFactory;
import org.elasticsearch.Version;



public class NGramTokenFilterFactory extends AbstractTokenFilterFactory {
@@ -36,8 +38,21 @@ public class NGramTokenFilterFactory extends AbstractTokenFilterFactory {

NGramTokenFilterFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) {
super(indexSettings, name, settings);
int maxAllowedNgramDiff = indexSettings.getMaxNgramDiff();
this.minGram = settings.getAsInt("min_gram", NGramTokenFilter.DEFAULT_MIN_NGRAM_SIZE);
this.maxGram = settings.getAsInt("max_gram", NGramTokenFilter.DEFAULT_MAX_NGRAM_SIZE);
int ngramDiff = maxGram - minGram;
if (ngramDiff > maxAllowedNgramDiff) {
if (indexSettings.getIndexVersionCreated().onOrAfter(Version.V_7_0_0_alpha1)) {
throw new IllegalArgumentException(
"The difference between max_gram and min_gram in NGram Tokenizer must be less than or equal to: ["
+ maxAllowedNgramDiff + "] but was [" + ngramDiff + "]. This limit can be set by changing the ["
+ IndexSettings.MAX_NGRAM_DIFF_SETTING.getKey() + "] index level setting.");
} else {
deprecationLogger.deprecated("Deprecated big difference between max_gram and min_gram in NGram Tokenizer,"
+ "expected difference must be less than or equal to: [" + maxAllowedNgramDiff + "]");
}
}
}

@Override
@@ -21,6 +21,7 @@

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.index.query.Operator;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;
@@ -66,6 +67,7 @@ public void testNgramHighlightingWithBrokenPositions() throws IOException {
.endObject())
.setSettings(Settings.builder()
.put(indexSettings())
.put(IndexSettings.MAX_NGRAM_DIFF_SETTING.getKey(), 19)
.put("analysis.tokenizer.autocomplete.max_gram", 20)
.put("analysis.tokenizer.autocomplete.min_gram", 1)
.put("analysis.tokenizer.autocomplete.token_chars", "letter,digit")