Can't get index message counts after 2.2.6->2.3.1 upgrade

When I try to go to System->Indices->(indexID page) I get a 500 Internal Server Error and "Fetching message count failed for indices [graylog2_xxxxx]" and so forth. When I use the API directly, I get the same thing:

curl -v -u egreen

HTTP/1.1 500 Internal Server Error

{"message":"Fetching message count failed for indices [graylog2_4426, graylog2_4547,

(it lists all of my indices, 270 of them).

I've already run 'Recalculate Index Ranges' twice; is there a 'Recalculate Message Counts' that I need to run too?

I am using Elasticsearch 2.4.6; I have not upgraded to Elasticsearch 5.2 yet. Do I need to do so in order for my index counts (and indexing in general) to work right?

What’s in the logs of Graylog and Elasticsearch when this error message occurs?

Nothing in the Graylog logs, they show no error at all. Let me check the Elasticsearch logs – have to find the cluster master… hmm…

I’m seeing a ton of these in the logs:

[2017-09-09 02:08:31,760][WARN ][http.netty ] [elasticsearch2] Caught exception while handling client http traffic, closing connection [id: 0x70d0c088, / => /]
org.jboss.netty.handler.codec.frame.TooLongFrameException: An HTTP line is larger than 4096 bytes.

My guess is that this is the issue. Let me see if there’s an option to increase the HTTP line length in the version of Elasticsearch that I’m using…
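A back-of-the-envelope check supports that guess. Here is a minimal sketch of how long the request line gets with 270 indices in a single multi-index count request; the index names are hypothetical, and whether the commas get percent-encoded depends on the HTTP client Graylog uses (assumed here):

```python
from urllib.parse import quote

# Hypothetical index names; only the count (270) matches this thread.
indices = [f"graylog2_{4400 + i}" for i in range(270)]

# Graylog queries counts for all indices in one request, roughly:
#   GET /<idx1>,<idx2>,.../_count HTTP/1.1
# If the client percent-encodes the commas (assumption), each one
# becomes %2C, i.e. three bytes instead of one.
path = "/" + quote(",".join(indices)) + "/_count"
request_line = f"GET {path} HTTP/1.1"

print(len(request_line))  # over 4096, Elasticsearch's default limit
```

Even without percent-encoding, 270 names plus separators come to roughly 3.8 kB, right at the 4096-byte default; with encoded commas the request line clearly exceeds it, which matches the TooLongFrameException above.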

Yep. That did it. I configured my Elasticsearch cluster with

http.max_initial_line_length: 64k

in the config file and now I get my index listing. I didn’t even think of looking at the Elasticsearch log because surely an error would appear in the Graylog log saying it couldn’t make an Elasticsearch call if it couldn’t make an Elasticsearch call, right? (Nope).
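For anyone hitting the same error: this is a static node setting, so (to my understanding) it goes in elasticsearch.yml on every node and needs a node restart to take effect. A sketch of the change:

```yaml
# elasticsearch.yml
# Raise the maximum HTTP request line length from the 4 kB default
# so long multi-index URLs from Graylog are accepted.
http.max_initial_line_length: 64k
```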

This looks like a bug in Graylog. It seems we missed that case when fixing a similar issue in Graylog 2.3.1:
