
Preconfigured word_delimiter_graph token filter has incorrect adjust_offsets default #43621

Closed
romseygeek opened this issue Jun 26, 2019 · 1 comment
Assignees: romseygeek
Labels: >bug, :Search Relevance/Analysis (How text is split into tokens), Team:Search Relevance

Comments

@romseygeek (Contributor) commented:

The standard factory and the documentation both have `adjust_offsets` defaulting to false, but the preconfigured `word_delimiter_graph` filter has it set to true.
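A minimal way to see the discrepancy is to run the Analyze API with the preconfigured filter referenced by name, and again with an inline definition that falls back to the factory defaults (the tokenizer and sample text here are illustrative, not from the original report):

```
GET /_analyze
{
  "tokenizer": "keyword",
  "filter": [ "word_delimiter_graph" ],
  "text": "Power-Shot"
}

GET /_analyze
{
  "tokenizer": "keyword",
  "filter": [ { "type": "word_delimiter_graph" } ],
  "text": "Power-Shot"
}
```

With `adjust_offsets` true, the split tokens get recomputed offsets (`Power` at 0-5, `Shot` at 6-10); with it false, both tokens keep the offsets of the original token (0-10). Before this fix, the two requests above would disagree: the named (preconfigured) variant used true while the inline variant used the factory default of false.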

romseygeek added the >bug and :Search Relevance/Analysis labels on Jun 26, 2019
romseygeek self-assigned this on Jun 26, 2019
@elasticmachine (Collaborator) commented:

Pinging @elastic/es-search

romseygeek added a commit to romseygeek/elasticsearch that referenced this issue Jun 26, 2019
romseygeek added a commit that referenced this issue Jun 27, 2019
When a named token filter or char filter is passed as part of an Analyze API
request with no index, we currently try to build the relevant filter using no
index settings. However, this can miss cases where there is a pre-configured
filter defined in the analysis registry. One example is the elision filter, which
has a pre-configured version built with the French elision set; when used as part
of normal analysis this preconfigured set is applied, but when used as part of the
Analyze API we end up with NPEs because the API tries to instantiate the filter with
no index settings.
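For example, a request of this shape (the sample text is illustrative) exercised the broken path, since `elision` is resolved by name with no index in scope:

```
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [ "elision" ],
  "text": "l'avion"
}
```

With the pre-configured French elision set this should return the single token `avion`; before the fix it instead hit the NPE described above.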

This commit changes the Analyze API to check for pre-configured filters when the
request has no index defined and a filter is referenced by name rather than by a
custom definition.

It also changes the pre-configured `word_delimiter_graph` filter and `edge_ngram`
tokenizer to make their settings consistent with the defaults used when creating
them with no settings.
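The same by-name-versus-inline comparison applies on the tokenizer side (again illustrative): after this change, both requests below should produce the same grams, whereas previously the preconfigured `edge_ngram` tokenizer could behave differently from an `edge_ngram` defined inline with no settings. With the factory defaults (`min_gram: 1`, `max_gram: 2`), the inline form yields `q` and `qu`.

```
GET /_analyze
{
  "tokenizer": "edge_ngram",
  "text": "quick"
}

GET /_analyze
{
  "tokenizer": { "type": "edge_ngram" },
  "text": "quick"
}
```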

Closes #43002
Closes #43621
Closes #43582
javanna added the Team:Search Relevance label on Jul 16, 2024