Checking the Elasticsearch index version in preparation for the upgrade to v3.5, as instructed. When I run the recommended command, I get a "connection aborted" error, and I'm not sure how else to check the index version. I have a single Graylog server that connects to a 2-node Elasticsearch cluster.
http :9200/_settings | jq '[ path(.[] | select(.settings.index.version.created < "5000000")) ]'
http: error: ConnectionError: ('Connection aborted.', error(113, 'No route to host')) while doing GET request to URL: http://localhost:9200/_settings
The problem seems simple enough: you're telling HTTPie (the http tool in your snippet) to connect to "localhost". I'm assuming you're running the command on the box that actually runs Elasticsearch. Of course, it's odd that the "localhost" alias wouldn't work.
“Onwards, to glory!”
Run: getent hosts localhost
Run: grep -i localhost /etc/hosts
Run: netstat -an | grep ^tcp | grep 9200
Run: nc localhost 9200 (or netcat, if that's the binary's name on your distro).
The question is: does Elasticsearch run on this box at all? And is its listener configured to bind to the "localhost" address?
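If the hostname checks pass, also look at which address Elasticsearch actually binds to. A minimal sketch, assuming a default package install with the config at /etc/elasticsearch/elasticsearch.yml (adjust the path for your layout):

# Show the bind address and port, if set explicitly:
grep -E '^\s*(network\.host|http\.port)' /etc/elasticsearch/elasticsearch.yml

# If network.host points at a specific interface IP, "localhost" won't
# reach it; query that address instead ("myEsNode" is a placeholder):
http myEsNode:9200/_settings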
Thanks for the tip. I added the hostname of the Elasticsearch node that I was running the command on, and I did get some data returned. I’m still not 100% sure if I’m running v1 indexes though. Here’s the output of the command:
Either that looks like a directory listing of some sort, or you have plenty of indices for weird stuff in your ES. I mean, why would a .exe show up in the same list as a .PHP and a .PL? This is a very odd listing.
I agree with you. This looks like a directory listing. I have two index sets: the default one and one that I made. I can't figure out how to tell whether they're v1 or not. The command I used is copied right from Graylog's v2.5 upgrade announcement page, which says:
"If this command returns no index names, you can upgrade from Elasticsearch 5 to 6 without additional manual steps required."
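As I read it, a clean result should print just an empty JSON array, since the whole jq filter is wrapped in [ ... ]:

[]

Instead, I got the strange listing in my previous post.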
So it should be clear that you have indices which were created with a version prior to Elasticsearch 5. That this command returns such strange index names makes me think someone has played a little game with your Elasticsearch. I hope it is not reachable from outside of your network.
If your Elasticsearch is only used by Graylog, I would just drop these indices.
Thanks for your response, Jan. I'm a bit confused, though. None of the names returned by the command are index names in my Elasticsearch environment. My Graylog/Elasticsearch environment is not reachable from outside, and I'm the only person who works on it. When I list my index sets in Graylog, only the two expected index set names show up. I'm going under the assumption that this command is not returning the correct data.
Graylog only shows the indices it works with; it will not display additional indices it does not expect to be present.
Run the following to get a list of all indices in your ES:
http myHostname:9200/_cat/indices?v
Graylog's own indices are easy to identify when checking System > Indices (the default name is graylog_XXX, where XXX is a number), and self-defined index sets can be identified by the naming schema you gave them.
All other indices are not created by Graylog and can be deleted if this ES is only used by Graylog.
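For example (strangeIndex is just a placeholder here; use whatever foreign names show up in the listing):

# List all indices with doc counts and sizes:
curl -s 'http://myHostname:9200/_cat/indices?v'

# Delete one index after confirming it is not Graylog's:
curl -XDELETE 'http://myHostname:9200/strangeIndex'

Double-check every name against System > Indices before deleting anything.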
You nailed it, Jan. The names shown in the command output showed up in the list of indices from the command line, but not in Graylog. They all showed no content from an Elasticsearch point of view, and I was able to successfully delete them using curl -XDELETE http://MyHostname:9200/strangeFileName/. All is right in the world. I'm going to keep my eyes open to see if the strange indices come back, though. Thanks for your help.
@sapplega I saw similarly named indices on several of my Graylog nodes after our security team ran one of their scanning tools on the network. I think the scanner finds Elasticsearch nodes and tests their security (or lack of it?) by creating these indices.
BTW, I found them while looking at the Graylog nodes using the kopf (now cerebro) tool. I used the same tool to delete them.
"My Graylog/Elasticsearch environment is not reachable from outside, and I'm the only person who works on it."
Are you dead sure about that? Could it be internal pen-testers doing their usual job? Or could it be a bot moving through your company's network? Either way, it's a sign that you should ping your CERT: if it's internal folks, they'd like to know that you "caught" them, and if it's an actual bot or attacker... well, obviously your CERT needs to know!
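A quick way to see how exposed the node really is (this assumes the ss tool from iproute2; netstat shows the same thing):

# Show which addresses the ES HTTP port is bound to:
ss -ltn 'sport = :9200'

# 0.0.0.0:9200 or [::]:9200 means it accepts connections on every
# interface, not just loopback - worth restricting if only Graylog
# and its nodes need to reach it.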