Process Buffer filling very fast during peak hours

Hi All

The process buffer is filling up very fast during peak hours, while input and output look normal.

Graylog version - 2.4
Elasticsearch version - 5.6

I have 7 Graylog nodes, each with the same configuration.

Hardware configuration

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                16
Memory
              total        used        free      shared  buff/cache   available
Mem:            29G         23G        210M        1.4G        5.8G        4.1G
Swap:            0B          0B          0B

server.conf file

output_batch_size = 8000

# Flush interval (in seconds) for the Elasticsearch output. This is the maximum amount of time between two
# batches of messages written to Elasticsearch. It is only effective at all if your minimum number of messages
# for this time period is less than output_batch_size * outputbuffer_processors.
output_flush_interval = 1
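# Illustrative arithmetic: with output_batch_size = 8000 and, say, three
# outputbuffer_processors, up to 8000 * 3 = 24000 messages can be written per
# flush cycle - at a 1-second interval that comfortably covers an ingest rate
# of ~10k messages per second.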

# As stream outputs are loaded only on demand, an output which is failing to initialize will be tried over and
# over again. To prevent this, the following configuration options define after how many faults an output will
# not be tried again for an also configurable amount of seconds.
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30

# The number of parallel running processors.
# Raise this number if your buffers are filling up.
processbuffer_processors = 192
outputbuffer_processors = 320

# The following settings (outputbuffer_processor_*) configure the thread pools backing each output buffer processor.
# See https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ThreadPoolExecutor.html for technical details

# When the number of threads is greater than the core (see outputbuffer_processor_threads_core_pool_size),
# this is the maximum time in milliseconds that excess idle threads will wait for new tasks before terminating.
# Default: 5000
#outputbuffer_processor_keep_alive_time = 5000

# The number of threads to keep in the pool, even if they are idle, unless allowCoreThreadTimeOut is set
# Default: 3
#outputbuffer_processor_threads_core_pool_size = 3

# The maximum number of threads to allow in the pool
# Default: 30
#outputbuffer_processor_threads_max_pool_size = 30

# UDP receive buffer size for all message inputs (e. g. SyslogUDPInput).
#udp_recvbuffer_sizes = 1048576

# Wait strategy describing how buffer processors wait on a cursor sequence. (default: sleeping)
# Possible types:
#  - yielding
#     Compromise between performance and CPU usage.
#  - sleeping
#     Compromise between performance and CPU usage. Latency spikes can occur after quiet periods.
#  - blocking
#     High throughput, low latency, higher CPU usage.
#  - busy_spinning
#     Avoids syscalls which could introduce latency jitter. Best when threads can be bound to specific CPU cores.
processor_wait_strategy = blocking

# Size of internal ring buffers. Raise this if raising outputbuffer_processors does not help anymore.
# For optimum performance your LogMessage objects in the ring buffer should fit in your CPU L3 cache.
# Must be a power of 2. (512, 1024, 2048, ...)
ring_size = 524288
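# Sanity check: 524288 = 2^19, so the value above is a valid power of 2.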

inputbuffer_ring_size = 524288
#inputbuffer_ring_size = 131072
inputbuffer_processors = 4
inputbuffer_wait_strategy = blocking

# Enable the disk based message journal.
message_journal_enabled = true

Elasticsearch runs on separate nodes. There are no errors from the Elasticsearch nodes, and the ES cluster itself is working fine.

Please help me with this; a quick response would be appreciated.


Hi there. Start with something like 8/4 for processbuffer and outputbuffer processors if you have 16 logical CPUs.
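On a 16-core box that would look roughly like this in server.conf (a starting point to measure from, not a tuned value):

processbuffer_processors = 8
outputbuffer_processors = 4
# 8 + 4 = 12 worker threads, leaving headroom for input processors and the JVM itself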

@fangycz Thanks for the reply.

I have already increased:

processbuffer_processors = 192
outputbuffer_processors = 320

Will decreasing the values to processbuffer_processors = 8 and outputbuffer_processors = 4 work? The config file itself says "# Raise this number if your buffers are filling up." Also, what exactly do these two lines mean?

In my environment, logs are generated at ~10k per second, so please help me with a better configuration, and let me know if I am missing something.

Thanks

  • Having that much JVM heap configured will make your GC pauses very, very long. I would recommend lowering it to no more than 10 GB (see the sketch after this list).

  • With 16 CPUs available on each Graylog node, your configuration does not fit; it looks more like a random change of settings, without understanding what they do.

    inputbuffer_processors = 4
    processbuffer_processors = 192
    outputbuffer_processors = 320
    

    The total number of processors available to you is the number of CPU cores. These settings are per node, and you do not have 192 + 320 + 4 = 516 cores available, do you? In addition, your 7 Graylog nodes would try to open 7 × 320 = 2,240 connections to your Elasticsearch cluster at the same time … And you did not share what resources your Elasticsearch servers have …
    Return to the defaults:

    processbuffer_processors = 5
    outputbuffer_processors = 3
    inputbuffer_processors = 2
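
    For the heap change: on package installs, the Graylog JVM options usually live in /etc/default/graylog-server (Debian/Ubuntu) or /etc/sysconfig/graylog-server (RHEL/CentOS) - a minimal sketch, assuming that layout:

    # about 10GB of heap; restart graylog-server afterwards
    GRAYLOG_SERVER_JAVA_OPTS="-Xms10g -Xmx10g"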
    

@jan You mean I need to lower the JVM heap from 22.8 GB to less than 10 GB?

Also, is this configuration right?

processbuffer_processors = 192
outputbuffer_processors = 320

Please see my edited answer above.

OK, got it @jan.
According to you, we need to change the configs to this:

processbuffer_processors = 5
outputbuffer_processors = 3
inputbuffer_processors = 2

Will this help in processing ~10k messages per second?

Also, are the values below correct for our environment?

output_batch_size = 8000
ring_size = 524288
inputbuffer_ring_size = 524288

If you understand what those settings do, leave them as they are:

output_batch_size = 8000
ring_size = 524288
inputbuffer_ring_size = 524288

but if you do not, return to the defaults.

As all environments are snowflakes, you can't predict what will help and what won't. It is a matter of careful crafting and tuning.

OK, thanks @jan.

I changed the configuration as you described above. I will monitor it during today's peak hours and let you know whether the configs work or not.

Just some info from my experience: I had the same issue too and found it was because I had a DNS lookup table running on every message coming into a stream. I deleted that and it went back to normal.


@jan Still no luck. I used the same configuration that you sent:

processbuffer_processors = 5
outputbuffer_processors = 3
inputbuffer_processors = 2

Incoming messages are still very high, but outgoing is slow :frowning_face:

And what is your configured batch size?

Now you should check your Elasticsearch cluster too - what resources does that cluster have?
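A quick way to check that from any host that can reach the cluster (the IP is an example; point it at one of your ES nodes):

curl -s 'http://10.0.15.131:9200/_cluster/health?pretty'
# bulk thread-pool rejections are a strong hint that ES cannot keep up with ingest
curl -s 'http://10.0.15.131:9200/_cat/thread_pool/bulk?v&h=node_name,active,queue,rejected'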

@jan

output_batch_size = 8000
ring_size = 524288
inputbuffer_ring_size = 524288

We are all OK on Elasticsearch resources, @jan. Also, I have checked the health of the ES cluster and it is working fine.

Sorry @Tafsir_Alam, but my question wasn't whether “all is ok” - my question is what your configuration is. How many servers do you have, what CPU, RAM, and storage do they have, what is the index refresh setting for your indices, and what can be found in all your Elasticsearch logs?

I can tell you from a few years of working in this area: in 90% of cases it is Elasticsearch not having enough power to keep up with the messages you try to ingest.
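For example, the refresh interval can be read straight off the cluster (the index name is taken from your logs; the IP is an example; include_defaults also shows values you never set explicitly):

curl -s 'http://10.0.15.131:9200/graylog_1167/_settings?include_defaults=true&pretty'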

OK @jan, sending you my overall ES master and ES data node configuration.

ES master node resources:

CPU(s):                4

Memory
              total        used        free      shared  buff/cache   available
Mem:            27G         17G        273M        1.4G         10G        7.2G
Swap:            0B          0B          0B

ES master node (3 master nodes with the same configuration)

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: graylog
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: es-master01.mykaarma.com
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#

node.master: true
node.data: false
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["10.0.15.131", "10.0.15.134", "10.0.15.135"]
#discovery.zen.ping.unicast.hosts: ["10.0.15.131", "10.0.15.134", "10.0.15.135", "10.0.15.136"]
#discovery.zen.ping.unicast.hosts: ["10.0.15.131", "10.0.15.134", "10.0.15.135", "10.0.15.136", "10.0.15.137"]
discovery.zen.ping.unicast.hosts: ["10.0.15.131", "10.0.15.132", "10.0.15.133", "10.0.15.134", "10.0.15.135", "10.0.15.136", "10.0.15.137"]
#discovery.zen.ping.unicast.hosts: ["10.0.15.131", "10.0.15.132", "10.0.15.133", "10.0.15.134", "10.0.15.135"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
discovery.zen.minimum_master_nodes: 2
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
action.destructive_requires_name: true

#index.refresh_interval: 30s
#index.translog.flush_threshold_ops: 50000
#indices.store.throttle.max_bytes_per_sec: 1024mb
#thread_pool.search.queue_size : 2000

Logs

[2019-03-14T16:48:34,557][DEBUG][o.e.a.s.TransportSearchAction] [es-master01.mykaarma.com] [batchjob_logs_128][2], node[0UGFKvJ1QAmuymVk4aM6tQ], [P], s[STARTED], a[id=ToA7H4ZvSbCMhHf5f4bIqQ]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[aws_pi_logs_20, infralogs_156, batchjob_logs_128, graylog_1167], indicesOptions=IndicesOptions[id=38, ignore_unavailable=false, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_alisases_to_multiple_indices=true, forbid_closed_indices=true], types=[message], routing='null', preference='null', requestCache=null, scroll=null, maxConcurrentShardRequests=20, batchedReduceSize=512, preFilterShardSize=64, source={
  "from" : 0,
  "query" : {
    "bool" : {
      "must" : [
        {
          "match_all" : {
            "boost" : 1.0
          }
        }
      ],
      "filter" : [
        {
          "bool" : {
            "must" : [
              {
                "range" : {
                  "timestamp" : {
                    "from" : "2019-03-14 16:43:34.549",
                    "to" : "2019-03-14 16:48:34.549",
                    "include_lower" : true,
                    "include_upper" : true,
                    "boost" : 1.0
                  }
                }
              }
            ],
            "disable_coord" : false,
            "adjust_pure_negative" : true,
            "boost" : 1.0
          }
        }
      ],
      "disable_coord" : false,
      "adjust_pure_negative" : true,
      "boost" : 1.0
    }
  },
  "aggregations" : {
    "gl2_histogram" : {
      "date_histogram" : {
        "field" : "timestamp",
        "interval" : "1h",
        "offset" : 0,
        "order" : {
          "_key" : "asc"
        },
        "keyed" : false,
        "min_doc_count" : 0
      },
      "aggregations" : {
        "gl2_stats" : {
          "stats" : {
            "field" : "dealer_id"
          }
        }
      }
    }
  }
}}] lastShard [true]
org.elasticsearch.transport.RemoteTransportException: [es-data03.mykaarma.com][10.0.15.136:9300][indices:data/read/search[phase/query]]
Caused by: java.lang.IllegalArgumentException: Expected numeric type on field [dealer_id], but got [keyword]
	at org.elasticsearch.search.aggregations.support.ValuesSourceConfig.numericField(ValuesSourceConfig.java:306) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.search.aggregations.support.ValuesSourceConfig.originalValuesSource(ValuesSourceConfig.java:289) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.search.aggregations.support.ValuesSourceConfig.toValuesSource(ValuesSourceConfig.java:246) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory.createInternal(ValuesSourceAggregatorFactory.java:51) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.search.aggregations.AggregatorFactory.create(AggregatorFactory.java:225) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.search.aggregations.AggregatorFactories.createSubAggregators(AggregatorFactories.java:210) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.search.aggregations.AggregatorBase.<init>(AggregatorBase.java:78) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.<init>(BucketsAggregator.java:48) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregator.<init>(DateHistogramAggregator.java:71) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregatorFactory.createAggregator(DateHistogramAggregatorFactory.java:80) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregatorFactory.doCreateInternal(DateHistogramAggregatorFactory.java:74) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregatorFactory.doCreateInternal(DateHistogramAggregatorFactory.java:37) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory.createInternal(ValuesSourceAggregatorFactory.java:55) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.search.aggregations.AggregatorFactory.create(AggregatorFactory.java:225) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.search.aggregations.AggregatorFactories.createTopLevelAggregators(AggregatorFactories.java:226) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.search.aggregations.AggregationPhase.preProcess(AggregationPhase.java:55) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:111) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:252) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:267) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:343) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.search.SearchTransportService$6.messageReceived(SearchTransportService.java:340) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1553) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.6.4.jar:5.6.4]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_141]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_141]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_141]
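
That exception is unrelated to ingest: a search or widget is running a stats aggregation on dealer_id, which is mapped as keyword instead of a numeric type. The mapping can be confirmed roughly like this (host and index taken from the log above):

curl -s 'http://10.0.15.136:9200/batchjob_logs_128/_mapping/field/dealer_id?pretty'

Fixing it would need a custom index mapping/template that maps dealer_id as a numeric type, or aggregating on a field that already is numeric.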

ES data node (4 data nodes with the same configuration)

Resources:

CPU(s):                8

Memory:
              total        used        free      shared  buff/cache   available
Mem:            57G         35G        512M        2.9G         21G         18G
Swap:            0B          0B          0B

ES Data node config:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: graylog
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: es-data01.mykaarma.com
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#

node.master: false
node.data: true
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["10.0.15.131", "10.0.15.134", "10.0.15.135"]
#discovery.zen.ping.unicast.hosts: ["10.0.15.131", "10.0.15.134", "10.0.15.135", "10.0.15.136"]
#discovery.zen.ping.unicast.hosts: ["10.0.15.131", "10.0.15.134", "10.0.15.135", "10.0.15.136", "10.0.15.137"]
discovery.zen.ping.unicast.hosts: ["10.0.15.131", "10.0.15.132", "10.0.15.133", "10.0.15.134", "10.0.15.135", "10.0.15.136", "10.0.15.137"]
#discovery.zen.ping.unicast.hosts: ["10.0.15.131", "10.0.15.132", "10.0.15.133", "10.0.15.134", "10.0.15.135"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
discovery.zen.minimum_master_nodes: 2
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#bootstrap.mlockall: true
#indices.store.throttle.max_bytes_per_sec: 1024mb

ES Data Node Logs

[2019-03-14T18:07:46,676][DEBUG][o.e.a.b.TransportShardBulkAction] [es-data01.mykaarma.com] [graylog_1167][1] failed to execute bulk item (index) BulkShardRequest [[graylog_1167][1]] containing [1271] requests
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [preferred_date]
	at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:298) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:468) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.mapper.DocumentParser.parseValue(DocumentParser.java:591) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:396) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:373) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.mapper.DocumentParser.internalParseDocument(DocumentParser.java:93) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:66) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:277) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.shard.IndexShard.prepareIndex(IndexShard.java:530) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.shard.IndexShard.prepareIndexOnPrimary(IndexShard.java:507) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.prepareIndexOperationOnPrimary(TransportShardBulkAction.java:458) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnPrimary(TransportShardBulkAction.java:466) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:146) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:115) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:70) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:975) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:944) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:113) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:345) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:270) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:924) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:921) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:151) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationLock(IndexShard.java:1659) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryShardReference(TransportReplicationAction.java:933) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction.access$500(TransportReplicationAction.java:92) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:291) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:266) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:248) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:654) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.6.4.jar:5.6.4]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_141]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_141]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_141]
Caused by: java.lang.IllegalArgumentException: Invalid format: "null"
	at org.joda.time.format.DateTimeParserBucket.doParseMillis(DateTimeParserBucket.java:187) ~[joda-time-2.9.5.jar:2.9.5]
	at org.joda.time.format.DateTimeFormatter.parseMillis(DateTimeFormatter.java:826) ~[joda-time-2.9.5.jar:2.9.5]
	at org.elasticsearch.index.mapper.DateFieldMapper$DateFieldType.parse(DateFieldMapper.java:240) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.mapper.DateFieldMapper.parseCreateField(DateFieldMapper.java:465) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:287) ~[elasticsearch-5.6.4.jar:5.6.4]
	... 36 more

This is all the info from my side, @jan.

What JVM heap size did you configure for your ES nodes?

@jan

JVM heap size (for the ES master nodes - 3 master nodes in total):

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms16g
-Xmx16g

JVM heap size (for the ES data nodes - 4 data nodes in total):

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms32g
-Xmx32g
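
A side note on that data-node value, based on general Elasticsearch sizing guidance: 32 GB is right at the point where the JVM loses compressed ordinary object pointers, so heaps are usually kept a bit below that (~31 GB). Whether compressed oops are still active can be read from the node info, e.g.:

curl -s 'http://10.0.15.131:9200/_nodes/jvm?pretty' | grep -i compressed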

Also @jan, I am frequently receiving this log on the ES data nodes.

[2019-03-15T12:53:55,448][DEBUG][o.e.a.b.TransportShardBulkAction] [es-data01.mykaarma.com] [graylog_1169][1] failed to execute bulk item (index) BulkShardRequest [[graylog_1169][1]] containing [1274] requests
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [time]
	at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:298) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:468) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.mapper.DocumentParser.parseValue(DocumentParser.java:591) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:396) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:373) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.mapper.DocumentParser.internalParseDocument(DocumentParser.java:93) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:66) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:277) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.shard.IndexShard.prepareIndex(IndexShard.java:530) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.shard.IndexShard.prepareIndexOnPrimary(IndexShard.java:507) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.prepareIndexOperationOnPrimary(TransportShardBulkAction.java:458) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnPrimary(TransportShardBulkAction.java:466) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:146) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:115) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:70) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:975) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:944) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:113) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:345) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:270) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:924) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:921) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:151) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationLock(IndexShard.java:1659) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryShardReference(TransportReplicationAction.java:933) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction.access$500(TransportReplicationAction.java:92) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:291) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:266) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:248) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:654) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) [elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.6.4.jar:5.6.4]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_141]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_141]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_141]
Caused by: java.lang.IllegalArgumentException: Invalid format: "Fri"
	at org.joda.time.format.DateTimeParserBucket.doParseMillis(DateTimeParserBucket.java:187) ~[joda-time-2.9.5.jar:2.9.5]
	at org.joda.time.format.DateTimeFormatter.parseMillis(DateTimeFormatter.java:826) ~[joda-time-2.9.5.jar:2.9.5]
	at org.elasticsearch.index.mapper.DateFieldMapper$DateFieldType.parse(DateFieldMapper.java:240) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.mapper.DateFieldMapper.parseCreateField(DateFieldMapper.java:465) ~[elasticsearch-5.6.4.jar:5.6.4]
	at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:287) ~[elasticsearch-5.6.4.jar:5.6.4]
	... 36 more

That reveals at least one thing you should fix …

failed to parse [time]

That field is not parsable for Elasticsearch.
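One way to handle that in Graylog is a processing-pipeline rule that removes (or properly parses) the offending field before the message reaches Elasticsearch - a minimal sketch, assuming the field is named time as in the log above:

rule "drop unparsable time field"
when
  has_field("time")
then
  // values like "Fri" cannot be indexed into a date-mapped field;
  // removing the field keeps the whole message from being rejected at index time
  remove_field("time");
end

The same applies to preferred_date, which arrives as the literal string "null".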
