Problems with limit of total fields greater than 1000

I don’t know NXlog, but I do know that the majority of the time when people hit the 1000 (2000…?!?) field limit, it’s because they are capturing data *as* their field names. So for instance, capturing the timestamp into a field called timestamp might look like this:

timestamp: 2021-11-30 16:26:08.269 -05:00

But if you did it the other way around, where the name of the field IS the timestamp value:

2021-11-30 16:26:08.269 -05:00 : timestamp

Then every message would create a new field named “2021-11-30 16:26:08.269 -05:00” (or whatever the next timestamp happens to be) containing the value “timestamp”, so every message adds a new and different field to the mapping.

While the example is not technically possible, it’s meant to illustrate the idea that you likely have random names coming in as field names, and that is what is causing the overload.
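One quick way to confirm this is to pull the mapping for one of the affected indices and eyeball the field names (the index name below is just a placeholder, substitute one of yours):

```
GET logstash-2021.11.30/_mapping
```

If you see hundreds of keys that look like timestamps, IDs, or message fragments instead of stable field names, that is the problem.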

The command you posted doesn’t really tell us much other than that you applied the contents of index_limit_90day-template.json to Elasticsearch…
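For comparison, a template that just raises the limit usually looks something like the sketch below. The template name and index pattern here are guesses, I have no idea what is actually inside your index_limit_90day-template.json. This uses the legacy `_template` endpoint, which exists on both 6.x and 7.x:

```
PUT _template/index_limit_90day-template
{
  "index_patterns": ["logstash-*"],
  "settings": {
    "index.mapping.total_fields.limit": 2000
  }
}
```

Keep in mind a template only applies to indices created after it is installed. For an index that already exists, you would change the setting directly with `PUT <index>/_settings`.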

Here is a slightly older post that gives some more detail on how to handle it. There are some reasonably significant differences between Elasticsearch 6 and 7… I am guessing the article was written for version 6… you may be on version 7… or maybe not… 🙂 …so you may need to take that into account.
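One more thing that applies regardless of version: raising the limit only buys time if field names really are being generated from data. The longer-term fix is to stop Elasticsearch from auto-creating fields by turning off dynamic mapping. A minimal sketch, assuming a typeless 7.x mapping and made-up field names:

```
PUT _template/no-dynamic-fields
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "dynamic": false,
    "properties": {
      "timestamp": { "type": "date" },
      "message":   { "type": "text" }
    }
  }
}
```

With `"dynamic": false`, unmapped fields are still stored in `_source` but no longer added to the mapping (use `"strict"` instead if you want such documents rejected). On 6.x this is one of those differences I mentioned: the `mappings` block needs a document type name wrapped around it.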
