Logs are not displayed in my input


I have created a Syslog UDP input on the right port, and I receive the logs, but they aren't displayed in my input.

I'm on Graylog 4 (10 cores, 16 GB RAM)
MongoDB 4.0.28
Elasticsearch 7.10.2
All are running on the same machine.

If you need anything else, tell me and I will post it as fast as possible.


For troubleshooting this issue, have you tried setting the Syslog UDP input to Global instead of specifying the node?


If so, then I would try using tcpdump to see if messages are reaching Graylog.
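For reference, a capture along these lines will show whether packets are arriving on the input's port. The interface name and port below are assumptions; substitute the values from your own setup:

```shell
# Capture UDP traffic on the port the Graylog input listens on.
# "eth0" and port 5555 are placeholders; adjust for your environment.
# -n skips DNS lookups, -A prints the packet payload so you can see
# the raw syslog message text.
sudo tcpdump -n -A -i eth0 udp port 5555
```

If the messages show up here but not in Graylog, the problem is past the network layer (input binding, processing, or indexing) rather than delivery.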


I have tried it, and when I use tcpdump I receive all my logs.
Trying tcpdump was the first thing I did.


Lots of possible causes. Can you post a sample message you are sending? Is anything showing in the Processing and Indexing Failures stream?

Are all your components working in the same time zone? Under System/Overview scroll down to Time configuration…
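A quick way to check the host side of this is to compare the machine's clock and time zone against what the Graylog web interface reports. This assumes a systemd-based host; on other systems, `date` plus `/etc/timezone` gives the same information:

```shell
# Show the host's local time, time zone, and NTP sync status.
timedatectl
# Show the current time in UTC for comparison with Graylog's UI.
date -u
```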


Adding on to @patrickmann's and @tmacgbay's questions.

I wanted to mention this configuration. Notice the red box.


I would suggest using a Global INPUT instead.

Have you tried that?

14:32:10.640396 IP x.x.x.196.5555 > x.x.x.20.3517: UDP, length 150
14:32:10.641985 IP x.x.x.192.5555 > x.x.x.20.3517: UDP, length 145
14:32:10.644755 IP x.x.x.194.5555 > x.x.x.20.3517: UDP, length 145
14:32:10.650877 IP x.x.x.191.5555 > x.x.x.20.3517: UDP, length 145
14:32:10.657000 IP x.x.x.192.5555 > x.x.x.20.3517: UDP, length 145
14:32:10.657501 IP x.x.x.197.5555 > x.x.x.20.3517: UDP, length 147
14:32:10.664368 IP x.x.x.196.5555 > x.x.x.20.3517: UDP, length 150
14:32:10.673379 IP x.x.x.197.5555 > x.x.x.20.3517: UDP, length 145
14:32:10.676875 IP x.x.x.197.5555 > x.x.x.20.3517: UDP, length 144
14:32:10.679274 IP x.x.x.197.5555 > x.x.x.20.3517: UDP, length 144
14:32:10.681885 IP x.x.x.197.5555 > x.x.x.20.3517: UDP, length 167
14:32:10.682515 IP x.x.x.196.5555 > x.x.x.20.3517: UDP, length 170
14:32:10.701994 IP x.x.x.191.5555 > x.x.x.20.3517: UDP, length 145
14:32:10.734199 IP x.x.x.196.5555 > x.x.x.20.3517: UDP, length 152
14:32:10.796641 IP x.x.x.191.5555 > x.x.x.20.3517: UDP, length 141

And I have tried 'Global', but I don't think that changes anything because I only have one node.


Oh wow, 173,273 index errors; that's a lot.
It looks like you have some index issues to fix. I would check your logs and ensure Elasticsearch is running correctly.

We would need to see the configuration files and the log files.

OK - I see the failure message "Limit of total fields [1000] has been exceeded".
That is why your log messages are not being indexed. You will need to resolve that. Here is an article about that issue: What to Do When You Have 1000+ Fields? | Graylog
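As a sketch of what raising that limit looks like, the Elasticsearch index setting involved is `index.mapping.total_fields.limit`. The host/port, index prefix, and new limit below are assumptions; adjust them for your deployment (and note that indices created later by Graylog's rotation will keep the old limit unless the index template is changed too):

```shell
# Raise the per-index field limit on existing Graylog indices.
# Assumes Elasticsearch on localhost:9200 and the default "graylog_*" prefix.
curl -s -X PUT 'http://localhost:9200/graylog_*/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index.mapping.total_fields.limit": 2000}'
```

That said, a field count this high usually means something upstream is generating fields it shouldn't, so raising the limit is a stopgap rather than the fix.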


Good spot @patrickmann! The article posted gives a good description of what is going on. One thing to check for: if you are automatically pulling in fields (i.e. using set_fields() in a pipeline fed from a regex/GROK), it is possible that you are pulling in field names that change randomly, which drives the number of unique fields through the roof. Something to watch out for…
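One way to spot randomly generated field names is to dump the mapping of the current write index and scan the field list. `graylog_deflector` is Graylog's default write alias; adjust it if your index set uses a different prefix:

```shell
# Dump the field mapping of the active Graylog write index so that
# oddly or randomly named fields stand out when you page through it.
curl -s 'http://localhost:9200/graylog_deflector/_mapping?pretty' | less
```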

Good catch @patrickmann I think I need new glasses :laughing:

Hello, I have tried increasing the limit of total fields to 20,000, but nothing has changed in my web interface (I have restarted all three services for Graylog).

And I don't have any pipelines configured.

That seems far outside recommendations. If you are still having issues, examine your logs and post them here. There is a whole series of commands for accessing logs here … it also includes commands to examine Elasticsearch and surface the issues it's having. You can post the results of those here if you are not seeing/understanding what is presented. Properly formatted text is always preferred over screenshots…
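For a starting point, the checks below cover cluster health, index state, and recent service logs. They assume Elasticsearch on localhost:9200 and a package-based install managed by systemd; the log paths are the usual defaults and may differ on your machine:

```shell
# Cluster health: "red" means one or more indices cannot accept writes.
curl -s 'http://localhost:9200/_cluster/health?pretty'
# Per-index status, document counts, and sizes.
curl -s 'http://localhost:9200/_cat/indices?v'
# Recent Elasticsearch service log entries.
sudo journalctl -u elasticsearch -n 100 --no-pager
# Recent Graylog server log entries (default package install path).
sudo tail -n 100 /var/log/graylog-server/server.log
```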


Thanks for your reply, I will check elasticsearch tomorrow.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.