In our integration environment, there are two Graylog nodes running in a Kubernetes cluster. The process buffer is full, while the input and output buffers are empty.
I have been trying to tune performance by changing the number of processors for each buffer and by increasing the heap size. The outgoing message rate did reach 3,000–4,000 per second for a while, then dropped back to ~300 per second without any configuration change.
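For reference, these are roughly the settings I have been adjusting in graylog.conf (or via environment variables in the Kubernetes manifest). The values below are only illustrative, not what I believe to be correct:

```
# graylog.conf -- illustrative values only; the total number of processors
# should not exceed the CPU cores actually available to the pod.
processbuffer_processors = 5    # threads running extractors/pipelines (the stage that is backing up here)
outputbuffer_processors = 3     # threads writing batches to Elasticsearch
inputbuffer_processors = 2
output_batch_size = 500         # messages per bulk request to Elasticsearch
ring_size = 65536               # size of each buffer's ring; must be a power of two

# Heap is raised through the JVM options, e.g. in the container spec:
# GRAYLOG_SERVER_JAVA_OPTS: "-Xms2g -Xmx2g"
```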
Questions:
What could cause the process buffer to be full while the other buffers are empty?
Why does the outgoing message rate change when no configuration has been changed?
Which parameters should I try to configure?
There are also indexer failures with this message:
{"type":"mapper_parsing_exception","reason":"failed to parse field [metricValue] of type [float] in document with id 'cbbff660-46bd-11ea-9de4-0a580ae96622'","caused_by":{"type":"illegal_argument_exception","reason":"[float] supports only finite values, but got [NaN]"}}
Could this be the cause? How do I fix it?
Please bear with me, I am a beginner with Graylog. It would be much appreciated if you could point me to any step-by-step guidance.
Performance issues like this are usually caused by the processing itself, for example your regex extractors. Give the node more CPU instead of just raising the number of processors.
Please read the error message.
{"type":"mapper_parsing_exception","reason":"failed to parse field [metricValue] of type [float] in document with id 'cbbff660-46bd-11ea-9de4-0a580ae96622'","caused_by":{"type":"illegal_argument_exception","reason":"[float] supports only finite values, but got [NaN]"}}
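Elasticsearch rejects the document because NaN is not a valid float. The proper fix is to stop the sender from emitting NaN, but if you cannot change the source, a processing pipeline rule can strip the field before indexing. This is only a sketch and assumes the offending value shows up as a literal NaN on the metricValue field:

```
rule "drop non-finite metricValue"
when
  // only touch messages that carry the field and whose value reads as NaN
  has_field("metricValue") && to_string($message.metricValue) == "NaN"
then
  // remove the field so Elasticsearch no longer rejects the document
  remove_field("metricValue");
end
```

Attach the rule to a pipeline connected to the stream those messages arrive on; once the indexer failures stop, the output buffer should drain more steadily as well.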