Member "elasticsearch-6.8.23/docs/plugins/repository-azure.asciidoc" (29 Dec 2021, 7492 Bytes) of package /linux/www/elasticsearch-6.8.23-src.tar.gz.

Azure Repository Plugin

The Azure Repository plugin adds support for using Azure as a repository for {ref}/modules-snapshots.html[Snapshot/Restore].


This plugin can be installed using the plugin manager:

sudo bin/elasticsearch-plugin install repository-azure

The plugin must be installed on every node in the cluster, and each node must be restarted after installation.

This plugin can be downloaded for offline install from {plugin_url}/repository-azure/repository-azure-{version}.zip.


The plugin can be removed with the following command:

sudo bin/elasticsearch-plugin remove repository-azure

The node must be stopped before removing the plugin.

Azure Repository

To enable Azure repositories, you first have to define your Azure storage settings as {ref}/secure-settings.html[secure settings], before starting up the node:

bin/elasticsearch-keystore add azure.client.default.account
bin/elasticsearch-keystore add azure.client.default.key

Here account is the Azure storage account name and key is the Azure secret key. Instead of a secret key under key, you can alternatively define a shared access signature (SAS) token under sas_token to use for authentication. When using an SAS token instead of an account key, the SAS token must have read (r), write (w), list (l), and delete (d) permissions for the repository base path and all its contents. These permissions must be granted for the blob service (b) and apply to resource types service (s), container (c), and object (o). These settings are used by the repository's internal Azure client.
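As an illustration of the permission requirements above (not part of the plugin itself), the sketch below parses an account-SAS token's query string and checks for the required permissions, service, and resource types. It assumes the standard account-SAS query parameters sp (permissions), ss (services), and srt (resource types):

```python
from urllib.parse import parse_qs

def sas_token_ok(token):
    """Check that an account SAS token grants r/w/l/d on the blob
    service for resource types service, container, and object."""
    params = parse_qs(token.lstrip('?'))
    perms = set(params.get('sp', [''])[0])
    services = set(params.get('ss', [''])[0])
    rtypes = set(params.get('srt', [''])[0])
    return (set('rwld') <= perms      # read, write, list, delete
            and 'b' in services       # blob service
            and set('sco') <= rtypes) # service, container, object

print(sas_token_ok("sv=2018-03-28&ss=b&srt=sco&sp=rwdl&sig=..."))  # True
print(sas_token_ok("sv=2018-03-28&ss=b&srt=sco&sp=rl&sig=..."))    # False
```

A token failing this check will cause snapshot or restore operations to be rejected by Azure with an authorization error.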

Note that you can also define more than one account:

bin/elasticsearch-keystore add azure.client.default.account
bin/elasticsearch-keystore add azure.client.default.key
bin/elasticsearch-keystore add azure.client.secondary.account
bin/elasticsearch-keystore add azure.client.secondary.sas_token

default is the default account name which will be used by a repository, unless you set an explicit one in the repository settings.

The account, key, and sas_token storage settings are {ref}/secure-settings.html#reloadable-secure-settings[reloadable]. After you reload the settings, the internal azure clients, which are used to transfer the snapshot, will utilize the latest settings from the keystore.

In-progress snapshot/restore jobs are not preempted by a reload of the storage secure settings; they complete using the client as it was built when the operation started.

You can set the client-side timeout to use when making any single request. It can be defined globally, per account, or both. It is not set by default, which means Elasticsearch uses the default value set by the Azure client (5 minutes).

max_retries controls the exponential backoff policy: it sets the number of retries in case of failures before the snapshot is considered failed. Defaults to 3 retries. The initial backoff period is defined by the Azure SDK as 30s, meaning a 30s wait before retrying after a first timeout or failure. The maximum backoff period is defined by the Azure SDK as 90s.
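The retry schedule above can be sketched as follows (an illustrative model, assuming exponential doubling clamped between the SDK's 30s initial and 90s maximum delay; the exact jitter and growth curve are internal to the Azure SDK):

```python
def backoff_schedule(max_retries=3, initial=30, maximum=90):
    """Return the approximate wait (in seconds) before each retry."""
    delays = []
    for attempt in range(max_retries):
        # double the delay each attempt, capped at the SDK maximum
        delays.append(min(initial * (2 ** attempt), maximum))
    return delays

print(backoff_schedule())  # [30, 60, 90]
```

With the default of 3 retries, a request can therefore wait up to roughly three minutes in backoff before the snapshot is marked as failed.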

endpoint_suffix can be used to specify the Azure endpoint suffix explicitly. Defaults to core.windows.net.

cloud.azure.storage.timeout: 10s
azure.client.default.max_retries: 7
azure.client.default.endpoint_suffix: core.chinacloudapi.cn
azure.client.secondary.timeout: 30s

In this example, the timeout is 10s per try for default, with 7 retries before failing and the endpoint suffix core.chinacloudapi.cn; for secondary, the timeout is 30s per try with 3 retries and the default endpoint suffix.
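The per-client resolution described above can be sketched like this (an assumption-laden model, not plugin code: per-client values fall back to the global timeout, then to the defaults stated earlier, i.e. 3 retries and core.windows.net):

```python
# Settings from the example above, keyed exactly as in the config file.
SETTINGS = {
    'cloud.azure.storage.timeout': '10s',
    'azure.client.default.max_retries': 7,
    'azure.client.default.endpoint_suffix': 'core.chinacloudapi.cn',
    'azure.client.secondary.timeout': '30s',
}

def resolve(client):
    """Resolve the effective settings for one named client."""
    return {
        'timeout': SETTINGS.get(f'azure.client.{client}.timeout',
                                SETTINGS.get('cloud.azure.storage.timeout')),
        'max_retries': SETTINGS.get(f'azure.client.{client}.max_retries', 3),
        'endpoint_suffix': SETTINGS.get(
            f'azure.client.{client}.endpoint_suffix', 'core.windows.net'),
    }

print(resolve('default'))    # 10s timeout, 7 retries, core.chinacloudapi.cn
print(resolve('secondary'))  # 30s timeout, 3 retries, core.windows.net
```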

Supported Azure Storage Account types

The Azure Repository plugin works with all Standard storage accounts:

  • Standard Locally Redundant Storage - Standard_LRS

  • Standard Zone-Redundant Storage - Standard_ZRS

  • Standard Geo-Redundant Storage - Standard_GRS

  • Standard Read Access Geo-Redundant Storage - Standard_RAGRS

Premium Locally Redundant Storage (Premium_LRS) is not supported as it is only usable as VM disk storage, not as general storage.

You can register a proxy per client using the following settings:

azure.client.default.proxy.host: proxy.host
azure.client.default.proxy.port: 8888
azure.client.default.proxy.type: http

Supported values for proxy.type are direct (default), http or socks. When proxy.type is set to http or socks, proxy.host and proxy.port must be provided.
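A minimal sketch of that validation rule (illustrative only; the plugin performs an equivalent check internally when it builds the client):

```python
def validate_proxy(settings, client='default'):
    """Validate the proxy settings for one named client and return
    the effective proxy type ('direct' when none is configured)."""
    prefix = f'azure.client.{client}.proxy.'
    ptype = settings.get(prefix + 'type', 'direct')
    if ptype not in ('direct', 'http', 'socks'):
        raise ValueError(f'unsupported proxy.type: {ptype}')
    if ptype in ('http', 'socks'):
        # host and port are mandatory for a real proxy
        if prefix + 'host' not in settings or prefix + 'port' not in settings:
            raise ValueError(
                f'proxy.host and proxy.port are required when '
                f'proxy.type is {ptype}')
    return ptype
```

For example, validate_proxy({}) returns 'direct', while a settings map declaring proxy.type: socks without a host and port raises an error at client construction time.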

Repository settings

The Azure repository supports the following settings:

client

    Azure named client to use. Defaults to default.

container

    Container name. You must create the Azure container before creating the repository. Defaults to elasticsearch-snapshots.

base_path

    Specifies the path within the container to the repository data. Defaults to empty (root directory).

chunk_size

    Big files can be broken down into chunks during snapshotting if needed. Specify the chunk size as a value and unit, for example: 10MB, 5KB, 500B. Defaults to 64MB (64MB max).

compress

    When set to true, metadata files are stored in compressed format. This setting doesn't affect index files that are already compressed by default. Defaults to false.

max_restore_bytes_per_sec

    Throttles per node restore rate. Defaults to 40mb per second.

max_snapshot_bytes_per_sec

    Throttles per node snapshot rate. Defaults to 40mb per second.

readonly

    Makes repository read-only. Defaults to false.

location_mode

    primary_only or secondary_only. Defaults to primary_only. Note that if you set it to secondary_only, it will force readonly to true.

Some examples, using scripts:

# The simplest one
PUT _snapshot/my_backup1
{
    "type": "azure"
}

# With some settings
PUT _snapshot/my_backup2
{
    "type": "azure",
    "settings": {
        "container": "backup-container",
        "base_path": "backups",
        "chunk_size": "32m",
        "compress": true
    }
}

# With two accounts defined in elasticsearch.yml (my_account1 and my_account2)
PUT _snapshot/my_backup3
{
    "type": "azure",
    "settings": {
        "client": "secondary"
    }
}

PUT _snapshot/my_backup4
{
    "type": "azure",
    "settings": {
        "client": "secondary",
        "location_mode": "primary_only"
    }
}

Example using Java:

client.admin().cluster().preparePutRepository("my_backup_java1")
    .setType("azure").setSettings(Settings.builder()
        .put(Storage.CONTAINER, "backup-container")
        .put(Storage.CHUNK_SIZE, new ByteSizeValue(32, ByteSizeUnit.MB))
    ).get();

Repository validation rules

According to the container naming guide, a container name must be a valid DNS name and conform to the following naming rules:

  • Container names must start with a letter or number, and can contain only letters, numbers, and the dash (-) character.

  • Every dash (-) character must be immediately preceded and followed by a letter or number; consecutive dashes are not permitted in container names.

  • All letters in a container name must be lowercase.

  • Container names must be from 3 through 63 characters long.
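The four rules above can be captured in a single check; the sketch below is an illustrative validator (the plugin and Azure enforce these rules themselves):

```python
import re

# Starts and ends with a lowercase letter or digit, allows single dashes
# in between (the negative lookahead rejects consecutive dashes), and the
# optional group plus the explicit length check enforce 3-63 characters.
CONTAINER_RE = re.compile(r'^(?!.*--)[a-z0-9](?:[a-z0-9-]{1,61}[a-z0-9])?$')

def valid_container_name(name):
    return bool(CONTAINER_RE.match(name)) and len(name) >= 3

print(valid_container_name('elasticsearch-snapshots'))  # True
print(valid_container_name('Bad-Name'))                 # False (uppercase)
print(valid_container_name('a--b'))                     # False (double dash)
print(valid_container_name('ab'))                       # False (too short)
```

A repository created with an invalid container name is rejected, so checking the name up front avoids a failed repository registration.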