Hey guys, since we started using Graylog more heavily, I've been running into an error. It doesn't affect my work, but I'd like to get rid of it:
ElasticsearchException[Elasticsearch exception [type=illegal_argument_exception, reason=Limit of total fields [1000] has been exceeded]]
I don't want to have to run this command for every new index:
PUT test_index/_settings
{
  "index.mapping.total_fields.limit": 5000
}
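I did wonder whether an index template could apply this automatically to new indices. Something like the sketch below (assuming Elasticsearch 7.8+ and the default graylog_* index prefix) should make new indices inherit the setting, but I'm not sure how it interacts with the template Graylog manages itself:

PUT _index_template/graylog-field-limit
{
  "index_patterns": ["graylog_*"],
  "template": {
    "settings": {
      "index.mapping.total_fields.limit": 5000
    }
  }
}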
Is there an option to make the field mapping dynamic, or to increase the field limit permanently?
PS: This is a fairly large production setup with roughly 55 GiB of logs per day (around 100,000,000 messages).
To my knowledge there is no way to permanently set this, and for good reason: raising the field limit has a negative performance impact on the Elasticsearch/OpenSearch cluster.
Are you able to separate your logs into different streams/index sets to split up the fields? Also, are you parsing JSON into fields?
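To illustrate the stream idea: a pipeline rule along these lines can route matching messages into their own stream, which then gets its own index set with its own field mapping. This is only a sketch; it assumes a stream named "Winlogbeat" already exists and is attached to a separate index set:

rule "route winlogbeat to its own stream"
when
  has_field("beats_type")
  && to_string($message.beats_type) == "winlogbeat"
then
  // move the message out of the Default stream so its fields
  // no longer count against the default index set's mapping
  route_to_stream(name: "Winlogbeat", remove_from_default: true);
end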
You can also try adding a remove_field pipeline rule to remove any unwanted fields. What is nice about this pipeline function is that it accepts regex patterns. So, for example, if I wanted to remove ALL fields that start with winlogbeat_, I could do something like:
rule "TEST remove_field using regex"
when
has_field("beats_type")
&& to_string($message.beats_type) == "winlogbeat"
then
remove_field(
"^winlogbeat_.*"
);
end
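One caveat: the regex behaviour of remove_field has changed across Graylog versions. If the pattern form doesn't work on your release, newer versions split this into remove_single_field() for literal field names and remove_multiple_fields() for regex patterns.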
I'm on my way to creating streams and routing these logs out of the default stream. One thing I don't know: is it possible to remove fields without editing the plain log? The laws in Germany are very strict about editing potential evidence…
If you are just parsing fields out of a message field (as you would with syslog etc.), then removing a parsed field won't edit the message itself. This is normally also possible with Windows logs from winlogbeat etc., as they include both the separate fields and the message field. The message field will usually show the "original", and you can keep just the fields you need for your purposes.
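As an illustration (a sketch only, not based on your setup): a rule like this extracts key=value pairs from the raw text into separate fields, and you could later remove individual extracted fields without the original message field ever being modified:

rule "parse key=value pairs, keep original message"
when
  has_field("message")
then
  // key_value() returns a map of extracted pairs; set_fields()
  // adds them as separate fields. $message.message is only read,
  // never rewritten, so the raw log text stays intact.
  set_fields(key_value(to_string($message.message)));
end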