"Fossies" - the Fresh Open Source Software Archive

Member "elasticsearch-6.8.23/docs/plugins/analysis-nori.asciidoc" (29 Dec 2021, 11197 Bytes) of package /linux/www/elasticsearch-6.8.23-src.tar.gz:


As a special service "Fossies" has tried to format the requested source page into HTML format (assuming AsciiDoc format). Alternatively you can here view or download the uninterpreted source code file. A member file download can also be achieved by clicking within a package contents listing on the according byte size field.

Korean (nori) Analysis Plugin

The Korean (nori) Analysis plugin integrates the Lucene nori analysis module into Elasticsearch. It uses the mecab-ko-dic dictionary to perform morphological analysis of Korean text.

Installation

This plugin can be installed using the plugin manager:

sudo bin/elasticsearch-plugin install analysis-nori

The plugin must be installed on every node in the cluster, and each node must be restarted after installation.
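
After restarting, you can confirm that the plugin was picked up on every node. A quick check with the cat plugins API:

GET _cat/plugins?v

Every node in the cluster should report a row for analysis-nori.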

This plugin can be downloaded for offline install from {plugin_url}/analysis-nori/analysis-nori-{version}.zip.

Removal

The plugin can be removed with the following command:

sudo bin/elasticsearch-plugin remove analysis-nori

The node must be stopped before removing the plugin.

nori analyzer

The nori analyzer consists of the following tokenizer and token filters:

  * nori_tokenizer
  * nori_part_of_speech token filter
  * nori_readingform token filter
  * lowercase token filter

It supports the decompound_mode and user_dictionary settings from nori_tokenizer and the stoptags setting from nori_part_of_speech.
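
For example, a minimal sketch that configures the nori analyzer with an explicit decompound mode and stop tag list (the index name nori_sample and the tags E and J are illustrative placeholders):

PUT nori_sample
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "my_analyzer": {
            "type": "nori",
            "decompound_mode": "mixed",
            "stoptags": ["E", "J"]
          }
        }
      }
    }
  }
}

Only the listed settings are overridden; the analyzer's built-in tokenizer and filter chain stays in place.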

nori_tokenizer

The nori_tokenizer accepts the following settings:

decompound_mode

The decompound mode determines how the tokenizer handles compound tokens. It can be set to:

none

No decomposition for compounds. Example output:

가거도항
가곡역
discard

Decomposes compounds and discards the original form (default). Example output:

가곡역 => 가곡, 역
mixed

Decomposes compounds and keeps the original form. Example output:

가곡역 => 가곡역, 가곡, 역
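
The three modes can be compared directly with the _analyze API by defining the tokenizer inline rather than in the index settings. A minimal sketch (swap mixed for none or discard to reproduce the outputs above):

GET _analyze
{
  "tokenizer": {
    "type": "nori_tokenizer",
    "decompound_mode": "mixed"
  },
  "text": "가곡역"
}
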
user_dictionary

The Nori tokenizer uses the mecab-ko-dic dictionary by default. A user_dictionary with custom nouns (NNG) may be appended to the default dictionary. The dictionary should have the following format:

<token> [<token 1> ... <token n>]

The first token is mandatory and represents the custom noun that should be added to the dictionary. For compound nouns, the custom segmentation can be provided after the first token ([<token 1> ... <token n>]). The segmentation of the custom compound nouns is controlled by the decompound_mode setting.

As a demonstration of how the user dictionary can be used, save the following dictionary to $ES_HOME/config/userdict_ko.txt:

c++                 (1)
C샤프
세종
세종시 세종 시        (2)
  1. A simple noun

  2. A compound noun (세종시) followed by its decomposition: 세종 and 시.

Then create an analyzer as follows:

PUT nori_sample
{
  "settings": {
    "index": {
      "analysis": {
        "tokenizer": {
          "nori_user_dict": {
            "type": "nori_tokenizer",
            "decompound_mode": "mixed",
            "user_dictionary": "userdict_ko.txt"
          }
        },
        "analyzer": {
          "my_analyzer": {
            "type": "custom",
            "tokenizer": "nori_user_dict"
          }
        }
      }
    }
  }
}

GET nori_sample/_analyze
{
  "analyzer": "my_analyzer",
  "text": "세종시"  (1)
}
  1. Sejong city

The above analyze request returns the following:

{
  "tokens" : [ {
    "token" : "세종시",
    "start_offset" : 0,
    "end_offset" : 3,
    "type" : "word",
    "position" : 0,
    "positionLength" : 2    (1)
  }, {
    "token" : "세종",
    "start_offset" : 0,
    "end_offset" : 2,
    "type" : "word",
    "position" : 0
  }, {
    "token" : "시",
    "start_offset" : 2,
    "end_offset" : 3,
    "type" : "word",
    "position" : 1
   }]
}
  1. This is a compound token that spans two positions (mixed mode).

user_dictionary_rules

You can also inline the rules directly in the tokenizer definition using the user_dictionary_rules option:

PUT nori_sample
{
  "settings": {
    "index": {
      "analysis": {
        "tokenizer": {
          "nori_user_dict": {
            "type": "nori_tokenizer",
            "decompound_mode": "mixed",
            "user_dictionary_rules": ["c++", "C샤프", "세종", "세종시 세종 시"]
          }
        },
        "analyzer": {
          "my_analyzer": {
            "type": "custom",
            "tokenizer": "nori_user_dict"
          }
        }
      }
    }
  }
}
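
Since the inline rules are identical to the userdict_ko.txt file above, the same analyze request can be used to verify them:

GET nori_sample/_analyze
{
  "analyzer": "my_analyzer",
  "text": "세종시"
}

This returns the same three tokens (세종시, 세종 and 시) as the file-based example.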

The nori_tokenizer sets a number of additional attributes per token that are used by token filters to modify the stream. You can view all these additional attributes with the following request:

GET _analyze
{
  "tokenizer": "nori_tokenizer",
  "text": "뿌리가 깊은 나무는",   (1)
  "attributes" : ["posType", "leftPOS", "rightPOS", "morphemes", "reading"],
  "explain": true
}
  1. A tree with deep roots

Which responds with:

{
    "detail": {
        "custom_analyzer": true,
        "charfilters": [],
        "tokenizer": {
            "name": "nori_tokenizer",
            "tokens": [
                {
                    "token": "뿌리",
                    "start_offset": 0,
                    "end_offset": 2,
                    "type": "word",
                    "position": 0,
                    "leftPOS": "NNG(General Noun)",
                    "morphemes": null,
                    "posType": "MORPHEME",
                    "reading": null,
                    "rightPOS": "NNG(General Noun)"
                },
                {
                    "token": "가",
                    "start_offset": 2,
                    "end_offset": 3,
                    "type": "word",
                    "position": 1,
                    "leftPOS": "J(Ending Particle)",
                    "morphemes": null,
                    "posType": "MORPHEME",
                    "reading": null,
                    "rightPOS": "J(Ending Particle)"
                },
                {
                    "token": "깊",
                    "start_offset": 4,
                    "end_offset": 5,
                    "type": "word",
                    "position": 2,
                    "leftPOS": "VA(Adjective)",
                    "morphemes": null,
                    "posType": "MORPHEME",
                    "reading": null,
                    "rightPOS": "VA(Adjective)"
                },
                {
                    "token": "은",
                    "start_offset": 5,
                    "end_offset": 6,
                    "type": "word",
                    "position": 3,
                    "leftPOS": "E(Verbal endings)",
                    "morphemes": null,
                    "posType": "MORPHEME",
                    "reading": null,
                    "rightPOS": "E(Verbal endings)"
                },
                {
                    "token": "나무",
                    "start_offset": 7,
                    "end_offset": 9,
                    "type": "word",
                    "position": 4,
                    "leftPOS": "NNG(General Noun)",
                    "morphemes": null,
                    "posType": "MORPHEME",
                    "reading": null,
                    "rightPOS": "NNG(General Noun)"
                },
                {
                    "token": "는",
                    "start_offset": 9,
                    "end_offset": 10,
                    "type": "word",
                    "position": 5,
                    "leftPOS": "J(Ending Particle)",
                    "morphemes": null,
                    "posType": "MORPHEME",
                    "reading": null,
                    "rightPOS": "J(Ending Particle)"
                }
            ]
        },
        "tokenfilters": []
    }
}

nori_part_of_speech token filter

The nori_part_of_speech token filter removes tokens that match a set of part-of-speech tags. The list of supported tags and their meanings can be found in the Lucene POS.Tag Javadoc: {lucene-core-javadoc}/../analyzers-nori/org/apache/lucene/analysis/ko/POS.Tag.html

It accepts the following setting:

stoptags

An array of part-of-speech tags that should be removed. It defaults to:

"stoptags": [
    "E",
    "IC",
    "J",
    "MAG", "MAJ", "MM",
    "SP", "SSC", "SSO", "SC", "SE",
    "XPN", "XSA", "XSN", "XSV",
    "UNA", "NA", "VSV"
]

For example:

PUT nori_sample
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "my_analyzer": {
            "tokenizer": "nori_tokenizer",
            "filter": [
              "my_posfilter"
            ]
          }
        },
        "filter": {
          "my_posfilter": {
            "type": "nori_part_of_speech",
            "stoptags": [
              "NR"   (1)
            ]
          }
        }
      }
    }
  }
}

GET nori_sample/_analyze
{
  "analyzer": "my_analyzer",
  "text": "여섯 용이"  (2)
}
  1. Korean numerals should be removed (NR)

  2. Six dragons

Which responds with:

{
  "tokens" : [ {
    "token" : "용",
    "start_offset" : 3,
    "end_offset" : 4,
    "type" : "word",
    "position" : 1
  }, {
    "token" : "이",
    "start_offset" : 4,
    "end_offset" : 5,
    "type" : "word",
    "position" : 2
  } ]
}

nori_readingform token filter

The nori_readingform token filter rewrites tokens written in Hanja to their Hangul form.

PUT nori_sample
{
    "settings": {
        "index":{
            "analysis":{
                "analyzer" : {
                    "my_analyzer" : {
                        "tokenizer" : "nori_tokenizer",
                        "filter" : ["nori_readingform"]
                    }
                }
            }
        }
    }
}

GET nori_sample/_analyze
{
  "analyzer": "my_analyzer",
  "text": "鄕歌"      (1)
}
  1. A token written in Hanja: Hyangga

Which responds with:

{
  "tokens" : [ {
    "token" : "향가",     (1)
    "start_offset" : 0,
    "end_offset" : 2,
    "type" : "word",
    "position" : 0
  }]
}
  1. The Hanja form is replaced by the Hangul translation.