Kibana: How to solve mapping conflict

Recently I set up an ELK stack and started feeding it data from application logs. Unfortunately, one field (the most important one) was assigned the wrong mapping in Logstash: instead of double, it was sometimes interpreted as a string.

I didn’t have much success googling a quick and easy solution, so here is one.

Kibana was reporting the following error:

Mapping conflict! A field is defined as several types (string, integer, etc) across the indices that match this pattern. You may still be able to use these conflict fields in parts of Kibana, but they will be unavailable for functions that require Kibana to know their type. Correcting this issue will require reindexing your data.

See the source of the problem

As the message says, the problem is that a field has different types in different indices in Elasticsearch. By default, Logstash creates a new index every day, so every day the incoming values are parsed by Elasticsearch and mapped to data types again.

The first step is to check the field mapping in Elasticsearch. Use any interface that can issue REST requests, such as the console-based curl or Advanced REST Client for Chrome:

Method: GET
URL: http://host:port/_all/_mapping
(host is the name of the host where Elasticsearch is running; the port is 9200 by default)
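With curl, the same request might look like this (host is a placeholder for your Elasticsearch host; the ?pretty parameter just formats the JSON response for reading):

```shell
# Fetch the mapping of all indices; replace host with your Elasticsearch host
curl -X GET "http://host:9200/_all/_mapping?pretty"
```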

Copy the output to a text editor and search for the field name. In our case we found the field mapped to string in some indices and to double in others.

Fix mapping

There are two issues we have to fix now: First, we need to fix mapping of existing data. Second, we need to make sure that newly created indices will get correct mapping.

In Elasticsearch we cannot change mapping of data in already existing indices. To fix existing data, we will have to create a new index and copy existing data to it (this operation is called reindexing).

We will start with configuring correct mapping for newly created indices. We will create a new template and configure it to be applied to all indices with names starting with logstash:

method: PUT
URL: http://host:9200/_template/logstash_mapping
body:

{
  "template": "logstash-*",
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "json": {
      "_source": {
        "enabled": true
      },
      "properties": {
        "metric_value_number": {
          "type": "double"
        }
      }
    }
  }
}
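The same PUT request can be issued with curl, for example by saving the template body above to a file (logstash_mapping.json is a hypothetical filename):

```shell
# Create the index template; replace host with your Elasticsearch host.
# logstash_mapping.json contains the template body shown above.
curl -X PUT "http://host:9200/_template/logstash_mapping" \
  -H 'Content-Type: application/json' \
  -d @logstash_mapping.json
```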

In this example we have defined metric_value_number to always map to double. Now we can reindex existing indices. We will use the Reindex API and send a separate request for each index we want to reindex:

method: POST
URL: http://host:9200/_reindex
body:

{
  "source": {
    "index": "logstash-2016.09.29"
  },
  "dest": {
    "index": "logstash-2016.09.29-reindexed"
  }
}
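As a curl one-liner, the reindex request could be sent like this (host is again a placeholder):

```shell
# Copy all documents from the old index into a new one with correct mapping
curl -X POST "http://host:9200/_reindex" \
  -H 'Content-Type: application/json' \
  -d '{"source":{"index":"logstash-2016.09.29"},"dest":{"index":"logstash-2016.09.29-reindexed"}}'
```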

To check that the mapping is now correct, we can again send a GET request to http://host:port/_all/_mapping.

Now we can remove the old indices. We have to be careful here, as an index in Elasticsearch is the actual data storage, not just some recoverable data structure as in relational databases:

method: DELETE
URL: http://host:9200/logstash-2016.09.29
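With curl this is a single command. Double-check the index name before running it, since the deletion is irreversible:

```shell
# Permanently delete the old index; replace host with your Elasticsearch host
curl -X DELETE "http://host:9200/logstash-2016.09.29"
```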

Note: Be careful when deleting the index for the current date, as Logstash is still writing data to it. It is better to wait for the following day, let the new index initialize properly, and then reindex the full index from the previous day.

Refresh view in Kibana

In the Kibana web interface, go to Settings -> Indices and click Create in the Configure an index pattern form. The error message should have disappeared and you should be able to work with the field with its proper type.
