"Fossies" - the Fresh Open Source Software Archive

Member "elasticsearch-6.8.23/docs/reference/index-modules.asciidoc" (29 Dec 2021, 10356 Bytes) of package /linux/www/elasticsearch-6.8.23-src.tar.gz:


As a special service "Fossies" has tried to format the requested source page into HTML format (assuming AsciiDoc format). Alternatively you can here view or download the uninterpreted source code file. A member file download can also be achieved by clicking within a package contents listing on the according byte size field.

Analysis

The index analysis module acts as a configurable registry of analyzers that can be used in order to convert a string field into individual terms which are:

  • added to the inverted index in order to make the document searchable

  • used by high level queries such as the match query to generate search terms.

See [analysis] for configuration details.
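
For instance, a minimal sketch of registering a custom analyzer in the index settings at creation time might look like this (the my_index and my_analyzer names are hypothetical, and the tokenizer and filters are just illustrative built-ins):

PUT /my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": { (1)
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "asciifolding"]
        }
      }
    }
  }
}
  1. The my_index and my_analyzer names are only illustrative; fields can then reference the analyzer by this name in their mappings.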

Index Shard Allocation

This module provides per-index settings to control the allocation of shards to nodes:

Shard Allocation Filtering

Shard allocation filtering allows you to specify which nodes are allowed to host the shards of a particular index.

Note
The per-index shard allocation filters explained below work in conjunction with the cluster-wide allocation filters explained in [shards-allocation].

It is possible to assign arbitrary metadata attributes to each node at startup. For instance, nodes could be assigned a rack and a size attribute as follows:

bin/elasticsearch -Enode.attr.rack=rack1 -Enode.attr.size=big  (1)
  1. These attribute settings can also be specified in the elasticsearch.yml config file.

These metadata attributes can be used with the index.routing.allocation.* settings to allocate an index to a particular group of nodes. For instance, we can move the index test to either big or medium nodes as follows:

PUT test/_settings
{
  "index.routing.allocation.include.size": "big,medium"
}

Alternatively, we can move the index test away from the small nodes with an exclude rule:

PUT test/_settings
{
  "index.routing.allocation.exclude.size": "small"
}

Multiple rules can be specified, in which case all conditions must be satisfied. For instance, we could move the index test to big nodes in rack1 with the following:

PUT test/_settings
{
  "index.routing.allocation.include.size": "big",
  "index.routing.allocation.include.rack": "rack1"
}
Note
If some conditions cannot be satisfied then shards will not be moved.

The following settings are dynamic, allowing live indices to be moved from one set of nodes to another:

index.routing.allocation.include.{attribute}

Assign the index to a node whose {attribute} has at least one of the comma-separated values.

index.routing.allocation.require.{attribute}

Assign the index to a node whose {attribute} has all of the comma-separated values.

index.routing.allocation.exclude.{attribute}

Assign the index to a node whose {attribute} has none of the comma-separated values.
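
For example, a sketch of using require instead of include, which would restrict the test index to nodes that carry all of the listed attribute values:

PUT test/_settings
{
  "index.routing.allocation.require.size": "big",
  "index.routing.allocation.require.rack": "rack1"
}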

These special attributes are also supported:

_name

Match nodes by node name

_host_ip

Match nodes by host IP address (IP associated with hostname)

_publish_ip

Match nodes by publish IP address

_ip

Match either _host_ip or _publish_ip

_host

Match nodes by hostname

All attribute values can be specified with wildcards, e.g.:

PUT test/_settings
{
  "index.routing.allocation.include._ip": "192.168.2.*"
}
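
Similarly, a particular node can be kept from hosting the shards of test by excluding it by name (a sketch; node-1 is a hypothetical node name):

PUT test/_settings
{
  "index.routing.allocation.exclude._name": "node-1" (1)
}
  1. node-1 stands in for a real node name.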

Delaying allocation when a node leaves

When a node leaves the cluster for whatever reason, intentional or otherwise, the master reacts by:

  • Promoting a replica shard to primary to replace any primaries that were on the node.

  • Allocating replica shards to replace the missing replicas (assuming there are enough nodes).

  • Rebalancing shards evenly across the remaining nodes.

These actions are intended to protect the cluster against data loss by ensuring that every shard is fully replicated as soon as possible.

Even though we throttle concurrent recoveries both at the node level and at the cluster level, this "shard-shuffle" can still put a lot of extra load on the cluster which may not be necessary if the missing node is likely to return soon. Imagine this scenario:

  • Node 5 loses network connectivity.

  • The master promotes a replica shard to primary for each primary that was on Node 5.

  • The master allocates new replicas to other nodes in the cluster.

  • Each new replica makes an entire copy of the primary shard across the network.

  • More shards are moved to different nodes to rebalance the cluster.

  • Node 5 returns after a few minutes.

  • The master rebalances the cluster by allocating shards to Node 5.

If the master had just waited for a few minutes, then the missing shards could have been re-allocated to Node 5 with the minimum of network traffic. This process would be even quicker for idle shards (shards not receiving indexing requests) which have been automatically sync-flushed.

The allocation of replica shards which become unassigned because a node has left can be delayed with the index.unassigned.node_left.delayed_timeout dynamic setting, which defaults to 1m.

This setting can be updated on a live index (or on all indices):

PUT _all/_settings
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "5m"
  }
}

With delayed allocation enabled, the above scenario changes to look like this:

  • Node 5 loses network connectivity.

  • The master promotes a replica shard to primary for each primary that was on Node 5.

  • The master logs a message that allocation of unassigned shards has been delayed, and for how long.

  • The cluster remains yellow because there are unassigned replica shards.

  • Node 5 returns after a few minutes, before the timeout expires.

  • The missing replicas are re-allocated to Node 5 (and sync-flushed shards recover almost immediately).

Note
This setting will not affect the promotion of replicas to primaries, nor will it affect the assignment of replicas that have not been assigned previously. In particular, delayed allocation does not come into effect after a full cluster restart. Also, in case of a master failover situation, elapsed delay time is forgotten (i.e. reset to the full initial delay).

Cancellation of shard relocation

If delayed allocation times out, the master assigns the missing shards to another node which will start recovery. If the missing node rejoins the cluster, and its shards still have the same sync-id as the primary, shard relocation will be cancelled and the synced shard will be used for recovery instead.

For this reason, the default timeout is set to just one minute: even if shard relocation begins, cancelling recovery in favour of the synced shard is cheap.

Monitoring delayed unassigned shards

The number of shards whose allocation has been delayed by this timeout setting can be viewed with the cluster health API:

GET _cluster/health (1)
  1. This request will return a delayed_unassigned_shards value.

Removing a node permanently

If a node is not going to return and you would like Elasticsearch to allocate the missing shards immediately, just update the timeout to zero:

PUT _all/_settings
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "0"
  }
}

You can reset the timeout as soon as the missing shards have started to recover.
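
For example, a sketch of restoring the delay afterwards, either by setting it back to an explicit value or, as shown here, by setting it to null to fall back to the default of 1m:

PUT _all/_settings
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": null
  }
}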

Index recovery prioritization

Unallocated shards are recovered in order of priority, whenever possible. Indices are sorted into priority order as follows:

  • the optional index.priority setting (higher before lower)

  • the index creation date (higher before lower)

  • the index name (higher before lower)

This means that, by default, newer indices will be recovered before older indices.

Use the per-index dynamically updatable index.priority setting to customise the index prioritization order. For instance:

PUT index_1

PUT index_2

PUT index_3
{
  "settings": {
    "index.priority": 10
  }
}

PUT index_4
{
  "settings": {
    "index.priority": 5
  }
}

In the above example:

  • index_3 will be recovered first because it has the highest index.priority.

  • index_4 will be recovered next because it has the next highest priority.

  • index_2 will be recovered next because it was created more recently.

  • index_1 will be recovered last.

This setting accepts an integer, and can be updated on a live index with the update index settings API:

PUT index_4/_settings
{
  "index.priority": 1
}

Total Shards Per Node

The cluster-level shard allocator tries to spread the shards of a single index across as many nodes as possible. However, depending on how many shards and indices you have, and how big they are, it may not always be possible to spread shards evenly.

The following dynamic setting allows you to specify a hard limit on the total number of shards from a single index allowed per node:

index.routing.allocation.total_shards_per_node

The maximum number of shards (replicas and primaries) that will be allocated to a single node. Defaults to unbounded.
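
For example, a sketch of capping the hypothetical test index at five shards per node with the update index settings API:

PUT test/_settings (1)
{
  "index.routing.allocation.total_shards_per_node": 5
}
  1. The limit of 5 is just an illustrative value.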

You can also limit the amount of shards a node can have regardless of the index:

cluster.routing.allocation.total_shards_per_node

The maximum number of shards (replicas and primaries) that will be allocated to a single node globally. Defaults to unbounded (-1).

Warning

These settings impose a hard limit which can result in some shards not being allocated.

Use with caution.

Mapper

The mapper module acts as a registry for the type mapping definitions added to an index either when creating it or by using the put mapping api. It also handles the dynamic mapping support for types that have no explicit mappings predefined. For more information about mapping definitions, check out the mapping section.
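
As a minimal sketch, a new field could be added to an existing type with the put mapping API (the twitter index, _doc type and email field are hypothetical):

PUT twitter/_mapping/_doc (1)
{
  "properties": {
    "email": {
      "type": "keyword"
    }
  }
}
  1. The twitter index and email field are only illustrative.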

Merge

A shard in Elasticsearch is a Lucene index, and a Lucene index is broken down into segments. Segments are internal storage elements in the index where the index data is stored, and are immutable. Smaller segments are periodically merged into larger segments to keep the index size at bay and to expunge deletes.

The merge process uses auto-throttling to balance the use of hardware resources between merging and other activities like search.

Merge scheduling

The merge scheduler (ConcurrentMergeScheduler) controls the execution of merge operations when they are needed. Merges run in separate threads, and when the maximum number of threads is reached, further merges will wait until a merge thread becomes available.

The merge scheduler supports the following dynamic setting:

index.merge.scheduler.max_thread_count

The maximum number of threads on a single shard that may be merging at once. Defaults to Math.max(1, Math.min(4, Runtime.getRuntime().availableProcessors() / 2)) which works well for a good solid-state-disk (SSD). If your index is on spinning platter drives instead, decrease this to 1.
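
For example, a sketch of lowering the merge thread count on a live index backed by spinning disks:

PUT /twitter/_settings (1)
{
  "index.merge.scheduler.max_thread_count": 1
}
  1. twitter is just an example index name.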

Similarity module

A similarity (scoring / ranking model) defines how matching documents are scored. Similarity is per field, meaning that via the mapping one can define a different similarity per field.

Configuring a custom similarity is considered an expert feature and the built-in similarities are most likely sufficient, as described in [similarity].

Configuring a similarity

Most existing or custom Similarities have configuration options which can be configured via the index settings as shown below. The index options can be provided when creating an index or updating index settings.

PUT /index
{
    "settings" : {
        "index" : {
            "similarity" : {
              "my_similarity" : {
                "type" : "DFR",
                "basic_model" : "g",
                "after_effect" : "l",
                "normalization" : "h2",
                "normalization.h2.c" : "3.0"
              }
            }
        }
    }
}

Here we configure the DFRSimilarity so it can be referenced as my_similarity in mappings, as illustrated in the example below:

PUT /index/_mapping/_doc
{
  "properties" : {
    "title" : { "type" : "text", "similarity" : "my_similarity" }
  }
}

Available similarities

BM25 similarity (default)

TF/IDF based similarity that has built-in tf normalization and is supposed to work better for short fields (like names). See Okapi_BM25 for more details. This similarity has the following options:

k1

Controls non-linear term frequency normalization (saturation). The default value is 1.2.

b

Controls to what degree document length normalizes tf values. The default value is 0.75.

discount_overlaps

Determines whether overlap tokens (Tokens with 0 position increment) are ignored when computing norm. By default this is true, meaning overlap tokens do not count when computing norms.

Type name: BM25
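
As with the DFR example above, a BM25 similarity with custom parameters can be registered in the index settings and referenced from field mappings. A sketch (the my_bm25 name and parameter values are illustrative):

PUT /index
{
    "settings" : {
        "index" : {
            "similarity" : {
              "my_bm25" : { (1)
                "type" : "BM25",
                "k1" : "1.3",
                "b" : "0.9"
              }
            }
        }
    }
}
  1. The my_bm25 name and the parameter values are only illustrative.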

Classic similarity

deprecated::[6.3.0, "The quality of the produced scores used to rely on coordination factors, which have been removed. It is advised to use BM25 instead."]

The classic similarity that is based on the TF/IDF model. This similarity has the following option:

discount_overlaps

Determines whether overlap tokens (Tokens with 0 position increment) are ignored when computing norm. By default this is true, meaning overlap tokens do not count when computing norms.

Type name: classic

DFR similarity

Similarity that implements the divergence from randomness framework. This similarity has the following options:

basic_model

Possible values: be, d, g, if, in, ine and p.

after_effect

Possible values: no, b and l.

normalization

Possible values: no, h1, h2, h3 and z.

All options except the first require a normalization value.

Type name: DFR

DFI similarity

Similarity that implements the divergence from independence model. This similarity has the following options:

independence_measure

Possible values: standardized, saturated and chisquared.

Type name: DFI

IB similarity.

Information based model. The algorithm is based on the concept that the information content in any symbolic 'distribution' sequence is primarily determined by the repetitive usage of its basic elements. For written texts this challenge would correspond to comparing the writing styles of different authors. This similarity has the following options:

distribution

Possible values: ll and spl.

lambda

Possible values: df and ttf.

normalization

Same as in DFR similarity.

Type name: IB

LM Dirichlet similarity.

LM Dirichlet similarity. This similarity has the following options:

mu

Defaults to 2000.

Type name: LMDirichlet

LM Jelinek Mercer similarity.

LM Jelinek Mercer similarity. The algorithm attempts to capture important patterns in the text, while leaving out noise. This similarity has the following options:

lambda

The optimal value depends on both the collection and the query. The optimal value is around 0.1 for title queries and 0.7 for long queries. Defaults to 0.1. When the value approaches 0, documents that match more query terms will be ranked higher than those that match fewer terms.

Type name: LMJelinekMercer

Scripted similarity

A similarity that allows you to use a script in order to specify how scores should be computed. For instance, the below example shows how to reimplement TF-IDF:

PUT /index
{
  "settings": {
    "number_of_shards": 1,
    "similarity": {
      "scripted_tfidf": {
        "type": "scripted",
        "script": {
          "source": "double tf = Math.sqrt(doc.freq); double idf = Math.log((field.docCount+1.0)/(term.docFreq+1.0)) + 1.0; double norm = 1/Math.sqrt(doc.length); return query.boost * tf * idf * norm;"
        }
      }
    }
  },
  "mappings": {
    "_doc": {
      "properties": {
        "field": {
          "type": "text",
          "similarity": "scripted_tfidf"
        }
      }
    }
  }
}

PUT /index/_doc/1
{
  "field": "foo bar foo"
}

PUT /index/_doc/2
{
  "field": "bar baz"
}

POST /index/_refresh

GET /index/_search?explain=true
{
  "query": {
    "query_string": {
      "query": "foo^1.7",
      "default_field": "field"
    }
  }
}

Which yields:

{
  "took": 12,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 1,
    "max_score": 1.9508477,
    "hits": [
      {
        "_shard": "[index][0]",
        "_node": "OzrdjxNtQGaqs4DmioFw9A",
        "_index": "index",
        "_type": "_doc",
        "_id": "1",
        "_score": 1.9508477,
        "_source": {
          "field": "foo bar foo"
        },
        "_explanation": {
          "value": 1.9508477,
          "description": "weight(field:foo in 0) [PerFieldSimilarity], result of:",
          "details": [
            {
              "value": 1.9508477,
              "description": "score from ScriptedSimilarity(weightScript=[null], script=[Script{type=inline, lang='painless', idOrCode='double tf = Math.sqrt(doc.freq); double idf = Math.log((field.docCount+1.0)/(term.docFreq+1.0)) + 1.0; double norm = 1/Math.sqrt(doc.length); return query.boost * tf * idf * norm;', options={}, params={}}]) computed from:",
              "details": [
                {
                  "value": 1.0,
                  "description": "weight",
                  "details": []
                },
                {
                  "value": 1.7,
                  "description": "query.boost",
                  "details": []
                },
                {
                  "value": 2.0,
                  "description": "field.docCount",
                  "details": []
                },
                {
                  "value": 4.0,
                  "description": "field.sumDocFreq",
                  "details": []
                },
                {
                  "value": 5.0,
                  "description": "field.sumTotalTermFreq",
                  "details": []
                },
                {
                  "value": 1.0,
                  "description": "term.docFreq",
                  "details": []
                },
                {
                  "value": 2.0,
                  "description": "term.totalTermFreq",
                  "details": []
                },
                {
                  "value": 2.0,
                  "description": "doc.freq",
                  "details": []
                },
                {
                  "value": 3.0,
                  "description": "doc.length",
                  "details": []
                }
              ]
            }
          ]
        }
      }
    ]
  }
}
Warning
While scripted similarities provide a lot of flexibility, there is a set of rules that they need to satisfy. Failing to do so could make Elasticsearch silently return wrong top hits or fail with internal errors at search time:
  • Returned scores must be positive.

  • All other variables remaining equal, scores must not decrease when doc.freq increases.

  • All other variables remaining equal, scores must not increase when doc.length increases.

You might have noticed that a significant part of the above script depends on statistics that are the same for every document. It is possible to make the above slightly more efficient by providing a weight_script which will compute the document-independent part of the score and will be available under the weight variable. When no weight_script is provided, weight is equal to 1. The weight_script has access to the same variables as the script except doc since it is supposed to compute a document-independent contribution to the score.

The below configuration will give the same tf-idf scores but is slightly more efficient:

PUT /index
{
  "settings": {
    "number_of_shards": 1,
    "similarity": {
      "scripted_tfidf": {
        "type": "scripted",
        "weight_script": {
          "source": "double idf = Math.log((field.docCount+1.0)/(term.docFreq+1.0)) + 1.0; return query.boost * idf;"
        },
        "script": {
          "source": "double tf = Math.sqrt(doc.freq); double norm = 1/Math.sqrt(doc.length); return weight * tf * norm;"
        }
      }
    }
  },
  "mappings": {
    "_doc": {
      "properties": {
        "field": {
          "type": "text",
          "similarity": "scripted_tfidf"
        }
      }
    }
  }
}

Type name: scripted

Default Similarity

By default, Elasticsearch will use whatever similarity is configured as default.

You can change the default similarity for all fields in an index when it is created:

PUT /index
{
  "settings": {
    "index": {
      "similarity": {
        "default": {
          "type": "boolean"
        }
      }
    }
  }
}

If you want to change the default similarity after creating the index you must close your index, send the following request and open it again afterwards:

POST /index/_close

PUT /index/_settings
{
  "index": {
    "similarity": {
      "default": {
        "type": "boolean"
      }
    }
  }
}

POST /index/_open

Slow Log

Search Slow Log

The shard-level slow search log allows slow searches (query and fetch phases) to be logged into a dedicated log file.

Thresholds can be set for both the query phase and the fetch phase of execution. Here is a sample:

index.search.slowlog.threshold.query.warn: 10s
index.search.slowlog.threshold.query.info: 5s
index.search.slowlog.threshold.query.debug: 2s
index.search.slowlog.threshold.query.trace: 500ms

index.search.slowlog.threshold.fetch.warn: 1s
index.search.slowlog.threshold.fetch.info: 800ms
index.search.slowlog.threshold.fetch.debug: 500ms
index.search.slowlog.threshold.fetch.trace: 200ms

index.search.slowlog.level: info

All of the above settings are dynamic and can be set for each index using the update indices settings API. For example:

PUT /twitter/_settings
{
    "index.search.slowlog.threshold.query.warn": "10s",
    "index.search.slowlog.threshold.query.info": "5s",
    "index.search.slowlog.threshold.query.debug": "2s",
    "index.search.slowlog.threshold.query.trace": "500ms",
    "index.search.slowlog.threshold.fetch.warn": "1s",
    "index.search.slowlog.threshold.fetch.info": "800ms",
    "index.search.slowlog.threshold.fetch.debug": "500ms",
    "index.search.slowlog.threshold.fetch.trace": "200ms",
    "index.search.slowlog.level": "info"
}

By default, none are enabled (they are set to -1). The levels (warn, info, debug, trace) control the logging level at which a slow request will be logged. Not all thresholds need to be configured (for example, only the warn threshold can be set). The benefit of several levels is the ability to quickly "grep" for specific thresholds that were breached.

The logging is done at the shard level, meaning it covers the execution of a search request within a specific shard. It does not encompass the whole search request, which can be broadcast to several shards for execution. One of the benefits of shard-level logging is that it associates the actual execution with a specific machine, which request-level logging cannot.

The logging file is configured by default using the following configuration (found in log4j2.properties):

appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs}_index_search_slowlog.log
appender.index_search_slowlog_rolling.layout.type = PatternLayout
appender.index_search_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] [%node_name]%marker %.10000m%n
appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs}_index_search_slowlog-%d{yyyy-MM-dd}.log
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.time.interval = 1
appender.index_search_slowlog_rolling.policies.time.modulate = true

logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity = false

Index Slow log

The indexing slow log is similar in functionality to the search slow log. The log file name ends with _index_indexing_slowlog.log. The log and its thresholds are configured in the same way as the search slow log. Index slow log sample:

index.indexing.slowlog.threshold.index.warn: 10s
index.indexing.slowlog.threshold.index.info: 5s
index.indexing.slowlog.threshold.index.debug: 2s
index.indexing.slowlog.threshold.index.trace: 500ms
index.indexing.slowlog.level: info
index.indexing.slowlog.source: 1000

All of the above settings are dynamic and can be set for each index using the update indices settings API. For example:

PUT /twitter/_settings
{
    "index.indexing.slowlog.threshold.index.warn": "10s",
    "index.indexing.slowlog.threshold.index.info": "5s",
    "index.indexing.slowlog.threshold.index.debug": "2s",
    "index.indexing.slowlog.threshold.index.trace": "500ms",
    "index.indexing.slowlog.level": "info",
    "index.indexing.slowlog.source": "1000"
}

By default Elasticsearch will log the first 1000 characters of the _source in the slowlog. You can change that with index.indexing.slowlog.source. Setting it to false or 0 will skip logging the source entirely, and setting it to true will log the entire source regardless of size. The original _source is reformatted by default to make sure that it fits on a single log line. If preserving the original document format is important, you can turn off reformatting by setting index.indexing.slowlog.reformat to false, which will cause the source to be logged "as is" and can potentially span multiple log lines.
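
For example, a sketch of logging the whole _source without reformatting:

PUT /twitter/_settings (1)
{
    "index.indexing.slowlog.source": "true",
    "index.indexing.slowlog.reformat": false
}
  1. twitter is just an example index name.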

The index slow log file is configured by default in the log4j2.properties file:

appender.index_indexing_slowlog_rolling.type = RollingFile
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs}_index_indexing_slowlog.log
appender.index_indexing_slowlog_rolling.layout.type = PatternLayout
appender.index_indexing_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] [%node_name]%marker %.-10000m%n
appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs}_index_indexing_slowlog-%d{yyyy-MM-dd}.log
appender.index_indexing_slowlog_rolling.policies.type = Policies
appender.index_indexing_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_indexing_slowlog_rolling.policies.time.interval = 1
appender.index_indexing_slowlog_rolling.policies.time.modulate = true

logger.index_indexing_slowlog.name = index.indexing.slowlog.index
logger.index_indexing_slowlog.level = trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
logger.index_indexing_slowlog.additivity = false

Store

The store module allows you to control how index data is stored and accessed on disk.

File system storage types

There are different file system implementations or storage types. By default, Elasticsearch will pick the best implementation based on the operating environment.

This can be overridden for all indices by adding this to the config/elasticsearch.yml file:

index.store.type: niofs

It is a static setting that can be set on a per-index basis at index creation time:

PUT /my_index
{
  "settings": {
    "index.store.type": "niofs"
  }
}
Warning
This is an expert-only setting and may be removed in the future.

The following sections list all the supported storage types.

fs

Default file system implementation. This will pick the best implementation depending on the operating environment, which is currently mmapfs on all supported systems but is subject to change.

simplefs

The Simple FS type is a straightforward implementation of file system storage (maps to Lucene SimpleFsDirectory) using a random access file. This implementation has poor concurrent performance (multiple threads will bottleneck). It is usually better to use the niofs when you need index persistence.

niofs

The NIO FS type stores the shard index on the file system (maps to Lucene NIOFSDirectory) using NIO. It allows multiple threads to read from the same file concurrently. It is not recommended on Windows because of a bug in the SUN Java implementation.

mmapfs

The MMap FS type stores the shard index on the file system (maps to Lucene MMapDirectory) by mapping a file into memory (mmap). Memory mapping uses up a portion of the virtual memory address space in your process equal to the size of the file being mapped. Before using this class, be sure you have allowed plenty of virtual address space.

hybridfs

The hybridfs type is a hybrid of niofs and mmapfs, which chooses the best file system type for each type of file based on the read access pattern. Currently only the Lucene term dictionary, norms and doc values files are memory mapped. All other files are opened using Lucene NIOFSDirectory. Similarly to mmapfs be sure you have allowed plenty of virtual address space.

You can restrict the use of the mmapfs and the related hybridfs store types via the setting node.store.allow_mmap. This is a boolean setting indicating whether or not memory-mapping is allowed. The default is to allow it. This setting is useful, for example, if you are in an environment where you cannot control the ability to create a lot of memory maps, so you need to disable the ability to use memory-mapping.
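
For example, a sketch of disabling memory-mapping by adding the following line to the config/elasticsearch.yml file on each node:

node.store.allow_mmap: false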

Preloading data into the file system cache

Note
This is an expert setting, the details of which may change in the future.

By default, Elasticsearch completely relies on the operating system file system cache for caching I/O operations. It is possible to set index.store.preload in order to tell the operating system to load the content of hot index files into memory upon opening. This setting accepts a comma-separated list of file extensions: all files whose extension is in the list will be pre-loaded upon opening. This can be useful to improve the search performance of an index, especially when the host operating system is restarted, since this causes the file system cache to be trashed. However, note that this may slow down the opening of indices, as they will only become available after the data has been loaded into physical memory.

This setting is best-effort only and may not work at all depending on the store type and host operating system.

The index.store.preload is a static setting that can either be set in the config/elasticsearch.yml:

index.store.preload: ["nvd", "dvd"]

or in the index settings at index creation time:

PUT /my_index
{
  "settings": {
    "index.store.preload": ["nvd", "dvd"]
  }
}

The default value is the empty array, which means that nothing will be loaded into the file-system cache eagerly. For indices that are actively searched, you might want to set it to ["nvd", "dvd"], which will cause norms and doc values to be loaded eagerly into physical memory. These are the first two extensions to look at since Elasticsearch performs random access on them.

A wildcard can be used in order to indicate that all files should be preloaded: index.store.preload: ["*"]. Note however that it is generally not useful to load all files into memory, in particular those for stored fields and term vectors, so a better option might be to set it to ["nvd", "dvd", "tim", "doc", "dim"], which will preload norms, doc values, terms dictionaries, postings lists and points, which are the most important parts of the index for search and aggregations.

Note that this setting can be dangerous on indices that are larger than the size of the main memory of the host, as it would cause the filesystem cache to be trashed upon reopens after large merges, which would make indexing and searching slower.

Translog

Changes to Lucene are only persisted to disk during a Lucene commit, which is a relatively expensive operation and so cannot be performed after every index or delete operation. Changes that happen after one commit and before another will be removed from the index by Lucene in the event of process exit or hardware failure.

Because Lucene commits are too expensive to perform on every individual change, each shard copy also has a transaction log known as its translog associated with it. All index and delete operations are written to the translog after being processed by the internal Lucene index but before they are acknowledged. In the event of a crash, recent transactions that have been acknowledged but not yet included in the last Lucene commit can instead be recovered from the translog when the shard recovers.

An Elasticsearch flush is the process of performing a Lucene commit and starting a new translog. Flushes are performed automatically in the background in order to make sure the translog doesn’t grow too large, which would make replaying its operations take a considerable amount of time during recovery. The ability to perform a flush manually is also exposed through an API, although this is rarely needed.
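
For example, a manual flush can be requested for a single index, or for all indices, with the flush API:

POST /twitter/_flush (1)

POST /_flush
  1. twitter is just an example index name.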

Translog settings

The data in the translog is only persisted to disk when the translog is fsynced and committed. In the event of a hardware failure or an operating system crash or a JVM crash or a shard failure, any data written since the previous translog commit will be lost.

By default, Elasticsearch fsyncs and commits the translog every 5 seconds if index.translog.durability is set to async, or at the end of every index, delete, update, or bulk request if it is set to request (the default). More precisely, if set to request, Elasticsearch will only report success of an index, delete, update, or bulk request to the client after the translog has been successfully fsynced and committed on the primary and on every allocated replica.

The following dynamically updatable per-index settings control the behaviour of the translog:

index.translog.sync_interval

How often the translog is fsynced to disk and committed, regardless of write operations. Defaults to 5s. Values less than 100ms are not allowed.

index.translog.durability

Whether or not to fsync and commit the translog after every index, delete, update, or bulk request. This setting accepts the following parameters:

request

(default) fsync and commit after every request. In the event of hardware failure, all acknowledged writes will already have been committed to disk.

async

fsync and commit in the background every sync_interval. In the event of a failure, all acknowledged writes since the last automatic commit will be discarded.

index.translog.flush_threshold_size

The translog stores all operations that are not yet safely persisted in Lucene (i.e., are not part of a Lucene commit point). Although these operations are available for reads, they will need to be reindexed if the shard were to shut down and had to be recovered. This setting controls the maximum total size of these operations, to prevent recoveries from taking too long. Once the maximum size has been reached a flush will happen, generating a new Lucene commit point. Defaults to 512mb.

index.translog.retention.size

The total size of translog files to keep. Keeping more translog files increases the chance of performing an operation-based sync when recovering replicas. If the translog files are not sufficient, replica recovery will fall back to a file-based sync. Defaults to 512mb.

index.translog.retention.age

The maximum duration for which translog files will be kept. Defaults to 12h.
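
For example, a sketch of switching the durability described above to async on a live index that can tolerate losing a few seconds of acknowledged writes in exchange for cheaper writes:

PUT /twitter/_settings (1)
{
    "index.translog.durability": "async"
}
  1. twitter is just an example index name.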

What to do if the translog becomes corrupted?

Warning
This tool is deprecated and will be completely removed in 7.0. Use the elasticsearch-shard tool instead of this one.

In some cases (a bad drive, user error) the translog on a shard copy can become corrupted. When this corruption is detected by Elasticsearch due to mismatching checksums, Elasticsearch will fail that shard copy and refuse to use that copy of the data. If there are other copies of the shard available then Elasticsearch will automatically recover from one of them using the normal shard allocation and recovery mechanism. In particular, if the corrupt shard copy was the primary when the corruption was detected then one of its replicas will be promoted in its place.

If there is no copy of the data from which Elasticsearch can recover successfully, a user may want to recover the data that is part of the shard at the cost of losing the data that is currently contained in the translog. We provide a command-line tool for this, elasticsearch-translog.

Warning
The elasticsearch-translog tool should not be run while Elasticsearch is running. If you attempt to run this tool while Elasticsearch is running, you will permanently lose the documents that were contained only in the translog!

In order to run the elasticsearch-translog tool, specify the truncate subcommand as well as the directory for the corrupted translog with the -d option:

$ bin/elasticsearch-translog truncate -d /var/lib/elasticsearchdata/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/
Checking existing translog files
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!   WARNING: Elasticsearch MUST be stopped before running this tool   !
!                                                                     !
!   WARNING:    Documents inside of translog files will be lost       !
!                                                                     !
!   WARNING:          The following files will be DELETED!            !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-41.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-6.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-37.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-24.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-11.ckp

Continue and DELETE files? [y/N] y
Reading translog UUID information from Lucene commit from shard at [data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/index]
Translog Generation: 3
Translog UUID      : AxqC4rocTC6e0fwsljAh-Q
Removing existing translog files
Creating new empty checkpoint at [data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog.ckp]
Creating new empty translog at [data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-3.tlog]
Done.

You can also use the -h option to get a list of all options and parameters that the elasticsearch-translog tool supports.

Index Sorting

When creating a new index in Elasticsearch it is possible to configure how the Segments inside each Shard will be sorted. By default Lucene does not apply any sort. The index.sort.* settings define which fields should be used to sort the documents inside each Segment.

Warning
nested fields are not compatible with index sorting because they rely on the assumption that nested documents are stored in contiguous doc ids, which can be broken by index sorting. An error will be thrown if index sorting is activated on an index that contains nested fields.

For instance the following example shows how to define a sort on a single field:

PUT twitter
{
    "settings" : {
        "index" : {
            "sort.field" : "date", (1)
            "sort.order" : "desc" (2)
        }
    },
    "mappings": {
        "_doc": {
            "properties": {
                "date": {
                    "type": "date"
                }
            }
        }
    }
}
  1. This index is sorted by the date field

  2. …​ in descending order.

It is also possible to sort the index by more than one field:

PUT twitter
{
    "settings" : {
        "index" : {
            "sort.field" : ["username", "date"], (1)
            "sort.order" : ["asc", "desc"] (2)
        }
    },
    "mappings": {
        "_doc": {
            "properties": {
                "username": {
                    "type": "keyword",
                    "doc_values": true
                },
                "date": {
                    "type": "date"
                }
            }
        }
    }
}
  1. This index is sorted by username first then by date

  2. …​ in ascending order for the username field and in descending order for the date field.

Index sorting supports the following settings:

index.sort.field

The list of fields used to sort the index. Only boolean, numeric, date and keyword fields with doc_values are allowed here.

index.sort.order

The sort order to use for each field. The order option can have the following values:

  • asc: For ascending order

  • desc: For descending order.

index.sort.mode

Elasticsearch supports sorting by multi-valued fields. The mode option controls what value is picked to sort the document. The mode option can have the following values:

  • min: Pick the lowest value.

  • max: Pick the highest value.

index.sort.missing

The missing parameter specifies how docs which are missing the field should be treated. The missing value can have the following values:

  • _last: Documents without value for the field are sorted last.

  • _first: Documents without value for the field are sorted first.
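
For example, a sketch combining the mode and missing options described above for a potentially multi-valued numeric field:

PUT products (1)
{
    "settings" : {
        "index" : {
            "sort.field" : "price",
            "sort.order" : "asc",
            "sort.mode" : "min",
            "sort.missing" : "_last"
        }
    },
    "mappings": {
        "_doc": {
            "properties": {
                "price": {
                    "type": "double"
                }
            }
        }
    }
}
  1. The products index and price field are only illustrative.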

Warning
Index sorting can be defined only once at index creation. It is not allowed to add or update a sort on an existing index. Index sorting also has a cost in terms of indexing throughput since documents must be sorted at flush and merge time. You should test the impact on your application before activating this feature.

Early termination of search request

By default in Elasticsearch a search request must visit every document that matches a query in order to retrieve the top documents sorted by a specified sort. However, when the index sort and the search sort are the same, it is possible to limit the number of documents that should be visited per segment to retrieve the N top ranked documents globally. For example, let's say we have an index that contains events sorted by a timestamp field:

PUT events
{
    "settings" : {
        "index" : {
            "sort.field" : "timestamp",
            "sort.order" : "desc" (1)
        }
    },
    "mappings": {
        "doc": {
            "properties": {
                "timestamp": {
                    "type": "date"
                }
            }
        }
    }
}
  1. This index is sorted by timestamp in descending order (most recent first)

You can search for the last 10 events with:

GET /events/_search
{
    "size": 10,
    "sort": [
        { "timestamp": "desc" }
    ]
}

Elasticsearch will detect that the top docs of each segment are already sorted in the index and will only compare the first N documents per segment. The rest of the documents matching the query are collected to count the total number of results and to build aggregations.

If you’re only looking for the last 10 events and have no interest in the total number of documents that match the query you can set track_total_hits to false:

GET /events/_search
{
    "size": 10,
    "sort": [ (1)
        { "timestamp": "desc" }
    ],
    "track_total_hits": false
}
  1. The index sort will be used to rank the top documents and each segment will early terminate the collection after the first 10 matches.

This time, Elasticsearch will not try to count the number of documents and will be able to terminate the query as soon as N documents have been collected per segment.

{
  "_shards": ...
   "hits" : {
      "total" : -1,     (1)
      "max_score" : null,
      "hits" : []
  },
  "took": 20,
  "timed_out": false
}
  1. The total number of hits matching the query is unknown because of early termination.

Note
Aggregations will collect all documents that match the query regardless of the value of track_total_hits.

Use index sorting to speed up conjunctions

Index sorting can be useful in order to organize Lucene doc ids (not to be conflated with _id) in a way that makes conjunctions (a AND b AND …​) more efficient. In order to be efficient, conjunctions rely on the fact that if any clause does not match, then the entire conjunction does not match. By using index sorting, we can put documents that do not match together, which will help skip efficiently over large ranges of doc IDs that do not match the conjunction.

This trick only works with low-cardinality fields. A rule of thumb is that you should sort first on fields that both have a low cardinality and are frequently used for filtering. The sort order (asc or desc) does not matter as we only care about putting values that would match the same clauses close to each other.

For instance if you were indexing cars for sale, it might be interesting to sort by fuel type, body type, make, year of registration and finally mileage.
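
A sketch of such an index might look like this (the index name, field names and types are hypothetical):

PUT cars (1)
{
    "settings" : {
        "index" : {
            "sort.field" : ["fuel_type", "body_type", "make", "registration_year", "mileage"],
            "sort.order" : ["asc", "asc", "asc", "desc", "asc"]
        }
    },
    "mappings": {
        "_doc": {
            "properties": {
                "fuel_type": { "type": "keyword" },
                "body_type": { "type": "keyword" },
                "make": { "type": "keyword" },
                "registration_year": { "type": "short" },
                "mileage": { "type": "integer" }
            }
        }
    }
}
  1. All index, field and type choices here are only illustrative.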