I have seen this a few times now: my systems will take in messages but will no longer output them. Restarting generally fixes it, but the problem keeps coming back.
2017-08-08_16:50:30.56996 [DEBUG][o.e.a.a.i.m.p.TransportPutMappingAction] [koB3YKZ] failed to put mappings on indices [[[graylog_519/ET4HI3gYQpOs$
2017-08-08_16:50:30.57306 java.lang.IllegalArgumentException: Limit of total fields [1000] in index [graylog_519] has been exceeded
I think this is the pertinent log entry. How do I increase the Elasticsearch field limit, which I believe defaults to 1000 based on this conversation?
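You can raise it per index as a stopgap. The relevant Elasticsearch setting is index.mapping.total_fields.limit, which does default to 1000. A minimal sketch of raising it on the affected index, assuming Elasticsearch is reachable on localhost:9200 (the index name graylog_519 is taken from your log above):

curl -X PUT 'http://localhost:9200/graylog_519/_settings' \
  -H 'Content-Type: application/json' \
  -d '{ "index.mapping.total_fields.limit": 2000 }'

Keep in mind this only changes that one existing index. Graylog creates each new index from a template, so after the next rotation you would be back at 1000 unless you also put the setting into a custom index template.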
That being said, I think having 1000 different fields in one index is a bit excessive, and you should think about splitting your messages into different index sets so that each index has fewer than 1000 distinct fields: http://docs.graylog.org/en/2.3/pages/configuration/index_model.html
That seems very reasonable. Do you have a recommended rotation interval you use? I am currently rotating my index once a day, with an average of about 150 messages a second.
I am thinking of just cutting that in half and rotating every 12 hours. What are your thoughts on that?
Try splitting your one-size-fits-all index set into multiple index sets, e.g. one for your application logs and one for your network appliance logs (or similar).
It’s NOT about changing the index rotation/retention settings of an index set.
My network index is still hitting 1000 fields in a 24-hour period according to my logs. Should I rotate these logs in 6-hour chunks instead to alleviate this?
You can try, but having 1000 fields in a single index still sounds wrong to me.
What type of log messages do you record in that index set? Maybe normalizing these logs at an earlier stage (e.g. using extractors or pipeline processing rules) would make sense.
The Syslog input tries to be smart about the non-standard Fortigate syslog messages, but seems to fail on the URL in your logs because of special characters (like =). The Raw/Plaintext input doesn’t do any parsing.
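Just to illustrate why that can explode the field count (this sample message is made up, not from your logs): Fortigate messages are key=value pairs, so a line containing something like

url=https://example.com/page?session=abc&user=bob

can end up being split on the = characters inside the URL as well, so that fragments like session or user are indexed as brand-new field names. Since every unique URL can yield different fragments, the number of distinct fields grows with your traffic until the limit is hit.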
First I updated the extractor I got from the Graylog Marketplace to extract the URL to the following. So far it appears to be working correctly, but I won't know for a few hours.
I am not getting any more errors from my network index, but my default index is still showing more than 1000 fields and is throwing errors. Is there a way to see which field(s) are tripping up this index?
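One way to see which fields an index has accumulated is to pull its mapping straight from Elasticsearch and list the keys. A sketch, assuming Elasticsearch on localhost:9200, jq installed, and graylog_0 standing in for whatever index your default set is currently writing to (Graylog 2.x stores messages under the message mapping type):

curl -s 'http://localhost:9200/graylog_0/_mapping' \
  | jq -r '.[].mappings.message.properties | keys[]' \
  | sort

Piping the result through wc -l shows how close you are to the 1000-field limit, and skimming the sorted names usually makes machine-generated junk fields (e.g. fields created from URL fragments) easy to spot.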
Here is the error I am seeing.
Caused by: org.apache.lucene.queryparser.classic.ParseException: Cannot parse 'SGAWKZkziGhZA:': Encountered "<EOF>" at line 1, column 14.
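That exception comes from the Lucene query parser, so it looks like a search-time problem rather than another instance of the field limit: in the query syntax a colon separates a field name from its value, so a query ending in SGAWKZkziGhZA: leaves the parser expecting a value, and it hits end-of-input instead. If a saved search, dashboard widget, or alert condition contains a term with a trailing colon, escaping it as SGAWKZkziGhZA\: or quoting the whole term as "SGAWKZkziGhZA:" should make it parse again. That is a guess based on the stack trace, so it is worth checking where that query string is coming from.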