Hey guys, I let my / disk fill to capacity on my Graylog 3.1.2-1 server and messed something up to the point where no new logs show up in Search.
server.log was filling with:
2019-10-09T17:19:46.328-07:00 ERROR [Messages] Failed to index [8] messages. Please check the index error log in your web interface for the reason. Error: One or more of the items in the Bulk request failed, check BulkResult.getItems() for more information.
Collection containing a total of 204,800 indexer failures.
graylog_4 2d716d65-ead6-11e9-9cb3-0050568c3c52 {"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}
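From what I've read, that FORBIDDEN/12 cluster_block_exception is what Elasticsearch applies to every index once the data disk crosses the flood-stage watermark, which lines up with my full / disk. Checking disk usage from Elasticsearch's point of view is something like this (just a sketch, adjust host/port as needed):
# df -h /
# curl -XGET http://127.0.0.1:9200/_cat/allocation?v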
Not being attached to the existing logs, I decided to start over as best I could and delete all the graylog* indices like so:
# curl -XGET http://127.0.0.1:9200/_cat/indices/
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open graylog_2 mtCmjgI2SNSUsbds2D_iDQ 1 0 20000057 0 5gb 5gb
green open graylog_0 F-nOmEQMQPid6APFh646Sg 1 0 20004664 0 6.2gb 6.2gb
green open graylog_1 Pmye7tQ-RdeBf7INFw-Lbg 1 0 20014157 0 1.5gb 1.5gb
green open gl-system-events_0 4fKyvhTmQp2dXV9ZPk6J8w 1 0 0 0 261b 261b
green open gl-events_0 Umd5rVTOQbuaTQS8xtinmA 1 0 0 0 261b 261b
green open graylog_4 BcGGo6PnSZuMPjirovNNKQ 1 0 9118421 0 4gb 4gb
green open graylog_5 CsMpOaLURk-gBTvupgxK9g 1 0 0 0 261b 261b
green open graylog_3 NZPomkjKSFeY-JN4OHIohg 1 0 20000055 0 6.2gb 6.2gb
# curl -XDELETE http://localhost:9200/graylog_*/
# curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
# curl -XGET http://127.0.0.1:9200/_cat/indices/
green open gl-events_0 Umd5rVTOQbuaTQS8xtinmA 1 0 0 0 261b 261b
green open graylog_0 FKlCKPenS1mx-NblHK7doQ 1 0 0 0 261b 261b
green open gl-system-events_0 4fKyvhTmQp2dXV9ZPk6J8w 1 0 0 0 261b 261b
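For completeness, a quick way to double-check that the read-only flag really is cleared after freeing disk space is something like the below (my understanding is that Elasticsearch re-applies the block for as long as the disk stays over the flood-stage watermark):
# curl -XGET http://127.0.0.1:9200/_all/_settings?pretty | grep read_only_allow_delete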
I no longer see any ERROR entries in my server.log, and the only WARN I see is:
2019-10-10T09:56:33.850-07:00 WARN [UdpTransport] receiveBufferSize (SO_RCVBUF) for input SyslogUDPInput{title=Syslog 7514/udp, type=org.graylog2.inputs.syslog.udp.SyslogUDPInput, nodeId=null} (channel [id: 0xec4c0007, L:/0:0:0:0:0:0:0:0%0:7514]) should be 262144 but is 425984.
1 active node is reporting, and its journal status reads:
The journal contains **-109,153,944 unprocessed messages** in 1 segment. **50 messages** appended, **0 messages** read in the last second.
Inputs show inbound logs, and the KafkaJournal/JournalReader metrics show messages being written and read…
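(That negative unprocessed-message count makes me suspect the disk-full episode also corrupted the on-disk journal. If it comes to that, resetting it would look roughly like the below, assuming the default package journal path of /var/lib/graylog-server/journal and accepting that it throws away anything unprocessed:)
# systemctl stop graylog-server
# rm -rf /var/lib/graylog-server/journal/*
# systemctl start graylog-server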
I thought purging the indices might get me back up and searching, but Elasticsearch generated a new graylog_0 index and then I had a mix of STARTED and UNASSIGNED shards…
# curl -XGET localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason
gl-events_0 0 p STARTED
gl-system-events_0 0 p STARTED
graylog_0 3 p STARTED
graylog_0 3 r UNASSIGNED INDEX_CREATED
graylog_0 5 p STARTED
graylog_0 5 r UNASSIGNED INDEX_CREATED
graylog_0 1 p STARTED
graylog_0 1 r UNASSIGNED INDEX_CREATED
graylog_0 2 p STARTED
graylog_0 2 r UNASSIGNED INDEX_CREATED
graylog_0 4 p STARTED
graylog_0 4 r UNASSIGNED INDEX_CREATED
graylog_0 0 p STARTED
graylog_0 0 r UNASSIGNED INDEX_CREATED
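(Side note: those UNASSIGNED rows are replica shards, which can never be allocated on a single-node cluster because a replica can't live on the same node as its primary. The allocation explain API will report as much, e.g.:)
# curl -XGET localhost:9200/_cluster/allocation/explain?pretty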
So after much searching and trial and error, I was able to get all the shards into STARTED like so…
# curl -H "Content-Type: application/json" -XPUT localhost:9200/*/_settings -d "{ \"index\" : { \"number_of_replicas\" : 0 } }"
Which looks better…
# curl -XGET localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason
gl-events_0 0 p STARTED
graylog_1 2 p STARTED
graylog_1 1 p STARTED
graylog_1 3 p STARTED
graylog_1 5 p STARTED
graylog_1 4 p STARTED
graylog_1 0 p STARTED
gl-system-events_0 0 p STARTED
graylog_0 2 p STARTED
graylog_0 3 p STARTED
graylog_0 1 p STARTED
graylog_0 5 p STARTED
graylog_0 4 p STARTED
graylog_0 0 p STARTED
Even after all this, I'm still seeing "Nothing found" for every search…
At this point I just want to start over but keep my Dashboards, Extractors, etc…
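My understanding is that Dashboards, Extractors, Inputs and the rest of the configuration live in MongoDB rather than in Elasticsearch, so my rough plan is to dump that database before wiping anything else; something like the below, assuming the default database name graylog (the backup path is just an example):
# mongodump --db graylog --out /root/graylog-mongo-backup
and, if/when I rebuild:
# mongorestore --db graylog /root/graylog-mongo-backup/graylog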