Graylog NXLog help

Hi.
I'm trying to collect logs and analyze them with Graylog.
My system:
CentOS Linux 7 (Core)
Graylog
MongoDB
Elasticsearch
Java
Using winlogbeat:

# Needed for Graylog
fields_under_root: true
fields.collector_node_id: ${sidecar.nodeName}
fields.gl2_source_collector: ${sidecar.nodeId}

output.logstash:
  hosts: ["my ip address:50443"]
path:
  data: C:\Program Files\Graylog\sidecar\cache\winlogbeat\data
  logs: C:\Program Files\Graylog\sidecar\logs
tags:
  - windows
winlogbeat:
  event_logs:
    - name: Application
    - name: System
    - name: Security

Filebeat:

# Needed for Graylog
fields_under_root: true
fields.collector_node_id: ${sidecar.nodeName}
fields.gl2_source_collector: ${sidecar.nodeId}

output.logstash:
  hosts: ["my ip address:50443"]
path:
  data: C:\Program Files\Graylog\sidecar\cache\filebeat\data
  logs: C:\Program Files\Graylog\sidecar\logs
tags:
  - windows
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - C:\logs\log.log
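Both Beat configs can be sanity-checked with the Beats' built-in test subcommands before starting the sidecar (the paths below are assumptions; point -c at wherever the sidecar writes the rendered config):

filebeat.exe test config -c "C:\Program Files\Graylog\sidecar\generated\filebeat.yml"
filebeat.exe test output -c "C:\Program Files\Graylog\sidecar\generated\filebeat.yml"

The same two subcommands work with winlogbeat.exe and its generated config.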

NXLog:

define ROOT C:\Program Files (x86)\nxlog
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log

<Extension _json>
    Module xm_json
</Extension>

<Extension _syslog>
    Module xm_syslog
</Extension>

<Input eventlog>
    Module im_msvistalog
</Input>

<Output out>
    Module om_tcp
    Host 10.1.56.43
    Port 12201
    Exec to_syslog_ietf();
</Output>

<Route 1>
    Path eventlog => out
</Route>
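After editing that file, the syntax can be verified and the service restarted like this (default install path and service name assumed):

"C:\Program Files (x86)\nxlog\nxlog.exe" -v
net stop nxlog
net start nxlog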

Now there is another problem.
I can see that the logs are coming in, but they are not displayed in the list.
The following settings have been applied (see the linked screenshot).

Going to the System / Sidecars → Show messages tab, I don't see any new logs.
I cleaned out the journal folder (/var/lib/graylog-server/journal/), but it did not help; the logs are still not displayed. I've read the manual and can't find how to solve the problem…

Hello && Welcome @Alex25

You will have to wait until Elasticsearch gets caught up; I noticed this in your screenshot.

Depending on your resources, this may take some time.

Also, when the output buffer is over 90%, chances are there is a configuration error in the Graylog config file, or you don't have enough resources to handle that many logs.
I would seriously look into your local log files to see if there are any errors or warnings.

Examples:

tail -f /var/log/graylog-server/server.log

tail -f /var/log/elasticsearch/graylog.log

curl -XGET http://192.168.1.100:9200/_cluster/health?pretty=true
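In the health output, the status field is the main thing to look at: green or yellow means the cluster is accepting writes, red means it is not. To see whether documents are actually landing in the indices, the standard _cat API works too (same host and port as above):

curl -XGET http://192.168.1.100:9200/_cat/indices?v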

I don’t read/speak Russian.

Ok,
system characteristics:
4 CPUs
16 GB memory
50 GB HDD
100 GB second HDD

What parameters need to be increased to use more resources for processing?

Ok,
well, I would stop ingesting logs until the output buffer clears. Deleting the journal is not good.

Graylog configuration file: /etc/graylog/server.conf
These are the settings you need to know about; they should look something like this…

processbuffer_processors = 5
outputbuffer_processors = 3
inputbuffer_processors = 2

Those add up to 10, so you should have 10 CPU cores on this server; each one creates a thread, FYI…

Like I said, it could be a configuration error, or even a bad GROK pattern or REGEX.

Thanks, I'm checking now.
As for Grok, I'm not sure; I think everything is at the defaults…

Some advice: when you see the output buffer fill up like that, chances are something is wrong with Elasticsearch.

I tried to expand log collection with the previous settings; when setting up Elasticsearch, I had configured minimal index settings accordingly:
rotate_strategy = number
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
elasticsearch_shards = 1
elasticsearch_replicas = 0
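(If I understand the rotation strategy correctly, those values cap retention at 20,000,000 docs × 20 indices = 400,000,000 documents before the oldest index gets deleted.)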
I corrected the index settings in the screenshot, because a large number was set by default.
What do you mean, something is wrong with it?

Oh boy,
I would keep those as they were. If you need to correct your indices, I would do that through the Web UI.

Did you see what I posted above???

Not sure, must be a lack of communication.

I checked; those values are already set, and I corrected them in the indexes.
I meant that when I see something wrong with Elasticsearch, it takes a long time to process, and it's not clear how to fix it or what the cause is. :slight_smile:
I stopped collecting logs. Did I understand correctly that I now need to wait for the logs that have accumulated in the queue to be processed…


Awesome :+1:

Yeah, I think the main issue is that your output buffers are full; when they go back down to 0% you should see something.

Make sure Elasticsearch is good; you can verify this with this command: :point_down:

curl -XGET http://127.0.0.1:9200/_cluster/health?pretty=true

I have more questions.
How do I increase buffer processing? :smiling_face:

:laughing:

Man, you want me to come over there and set that up for you. LOL

Go into your Graylog configuration file and look for the settings I just showed you above. I'll post them again; they should look like this. Be careful: IF you don't have the CPU resources, I would not increase those… I would add more CPU cores to the Graylog server instead.

processbuffer_processors = 5  
outputbuffer_processors = 3  
inputbuffer_processors = 2
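For example (hypothetical numbers, assuming you grew the server to 16 cores), you could scale them up while keeping the sum at or below the core count:

processbuffer_processors = 8
outputbuffer_processors = 5
inputbuffer_processors = 3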

Why didn't you just say so, lol. :rofl: :joy:
Yes, I have these settings…

Thank you for your help.
I will give feedback on the result.

I checked.
I increased the CPU count to 10.
I started the collection again, and I can see that logs are being collected.
But in the Sidecars → Show messages menu I do not see the logs that are coming in.
I only see in the metrics that the counter is increasing.

Screenshots attached, in case they help…

https://slata365-my.sharepoint.com/:w:/g/personal/a_chernyavskiy_slata_com/EdWyI9mY0EdMnaAltkwUaKcBPQYlp2LyGNPAN4ejR92AOw?e=bhaCgC

I see you are using a GELF TCP input for NXLog. What does your NXLog configuration look like?
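If the input on port 12201 really is GELF TCP, the NXLog output needs to emit GELF rather than IETF syslog. A minimal sketch of what that output could look like (the _gelf extension name is just a placeholder; the xm_gelf module provides the GELF_TCP output type):

<Extension _gelf>
    Module xm_gelf
</Extension>

<Output out>
    Module om_tcp
    Host 10.1.56.43
    Port 12201
    OutputType GELF_TCP
</Output>

With that, the Exec to_syslog_ietf(); line would be dropped; to_syslog_ietf() matches a Syslog TCP input instead.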
