I’m collecting logs from my Cisco switches and I’d like to replace the default source IP with their hostname.
I’d like to use pipelines for that.
So I created one and connected it to the “Cisco” stream I created.
Then I added a stage with the following code:
has_field(“source”) AND contains(to_string($message.source), “10.56.120.244”)
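For context, that snippet is only the rule’s `when` condition; the actual replacement would happen in a `then` block. A full rule might look like the sketch below (the rule title and the `cisco-switch-01` hostname are placeholders I’ve assumed, since they weren’t shown):

```
rule "rename source for switch 10.56.120.244"
when
  has_field("source") AND contains(to_string($message.source), "10.56.120.244")
then
  // replace the IP in the source field with the switch's hostname (placeholder)
  set_field("source", "cisco-switch-01");
end
```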
Then if I try the simulator with this config:
it works:
but if I go to my Cisco stream or my dashboard, the source still displays the IP instead of the hostname:
So I’m probably forgetting something here. I would like my dashboard and my streams to always display the switch name instead of the IP.
Can you help me please?
Use the debug feature to see what is going on in your pipeline - see code below:
On a side note - when posting, use the forum tool </> for your code so it comes across in a readable format. It reads better as below. I also switched to the proper double quotes; a direct copy/paste can cause problems because Graylog hates “ and ” … it much prefers " and ".
has_field("source") AND contains(to_string($message.source), "10.56.120.244")
debug(concat("+++Original source: ", to_string($message.source)));
debug(concat("+++Final source: ", to_string($message.source)));
You can watch for the results in the log files with:
tail -f /var/log/graylog-server/server.log
Hi tmacgbay and thank you for your answer and for the </> tip. I will use it.
Unfortunately, I only have admin access to the web portal, not the Linux server.
Is there any way to run this debug from the web portal?
That’s strange: the pipeline works with the simulator but isn’t applied to the stream results.
It’s as if it isn’t linked to the stream. Is there a better/faster way to convert the source IP into a hostname, maybe? I don’t have any DNS entries for those switches, so I must map them manually.
Actually, I made it work!
Instead of connecting the pipeline to my Cisco stream, I connected it to “All messages”. Now it’s taking the changes into account…
My problem is solved, somehow, by doing that, but I’m curious why it didn’t apply when connected to the correct Cisco stream.
The debug() function only dumps to the Graylog log file - not having access to that will make things more challenging than they need to be. It’s hard to tell why a pipeline doesn’t work on your Cisco stream without a full view into all of it… much like your restricted access to the Linux side… Chances are there is a rule, perhaps in a previous stage, that changes a dependency. Rules in each stage are intended to run semi-parallel, but all rules must finish before the next stage of the pipeline can happen, and that depends on how you have defined the stage transition (i.e. only move to the next stage if at least one rule is true/runs). Follow the path a message goes through and check the resulting fields… hopefully you will find the hitch.
In this specific case, I only have this stage 0 with multiple but similar rules.
My rules are all written the same way:
has_field("source") AND contains(to_string($message.source), "<ip_of_the_switch>")
If I choose the Cisco stream in the simulator it works, but in reality it only works if I connect the pipeline to the default “All messages” stream.
This rule is quite basic I think, or maybe it’s the way I created the Cisco stream/input?
I already asked my datacenter team to give me direct access to the Linux server, but it was refused.
Just in case there is a rule conflict, you can put that rule in stage “-1” to make sure it runs before all the other rules… It is definitely odd behavior…
You could ask the datacenter if they can somehow forward the log file to you…
First of all, I hope you can understand even if the explanation is strange because I am not good at English.
Try using route_to_stream(name: "Stream_Name") in your pipeline rule.
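Put together in a rule, that could look like this sketch (the IP, the `cisco-switch-01` hostname, and the “Cisco” stream name are all assumptions on my part):

```
rule "route and rename cisco switch"
when
  has_field("source") AND contains(to_string($message.source), "10.56.120.244")
then
  set_field("source", "cisco-switch-01");   // placeholder hostname
  route_to_stream(name: "Cisco");           // placeholder stream name
end
```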
But I wonder whether you need a pipeline at all, because this is simple to do with an Input Extractor.
Create an ‘/etc/graylog/server/lookup-table.csv’ file and enter your IP-to-hostname mappings.
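For example (the IPs and hostnames below are placeholders, not from the thread), the file could look like:

```
"source","hostname"
"10.56.120.244","cisco-switch-01"
"10.56.120.245","cisco-switch-02"
```

The column headers here (`source`, `hostname`) are what the Data Adapter’s Key column and Value column settings will refer to in the next steps.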
Create a Lookup Table in System/lookup Tables of Graylog Webportal.
- Create Data Adapter
- Click Create data adapter on the Data Adapters page.
- Select CSV File in Data Adapter Type.
Key column and Value column are important.
- Select source for Key column and hostname for Value column.
- For the rest, enter appropriate values and click Create Adapter.
- Cache creation
- Click Create Cache on the Caches page.
- Select Node-local, in-memory cache in Cache Type.
- Enter an appropriate value to create a cache for the created data adapter.
- Create Lookup Table
- Click Create lookup table on the Lookup Tables page.
- Select data adapter and cache to create lookup table.
- On the System/Inputs page, create an extractor by going to Manage extractors on the created Input.
- Get started > Load Message
- Designate the lookup table created earlier in Lookup Table.
- Click Select extractor Type in source and select Lookup Table.
- Condition: Always try to extract
- To replace the source IP with the hostname (Cut): enter source in Store as field, select Cut as the Extraction strategy, enter an Extractor title, and click Update extractor.
- To keep the source IP and add a separate hostname field (Copy): enter hostname in Store as field, select Copy as the Extraction strategy, enter an Extractor title, and click Update extractor.
With Cut, the IP value in the source field is replaced with the hostname (as mapped in the “lookup-table.csv” file).
With Copy, a hostname field is created based on the value of the source field, holding the hostname that corresponds to the source IP.
This makes it easier to manage information for newly added devices: you only need to update lookup-table.csv.
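As an aside, the same lookup table can also be queried from a pipeline rule with lookup_value(). A sketch, assuming the lookup table was named `cisco_hostnames` (that name and the rule title are my placeholders):

```
rule "rename source via lookup table"
when
  has_field("source")
then
  let hostname = lookup_value("cisco_hostnames", to_string($message.source));
  // fall back to the original source when the table has no match
  set_field("source", to_string(hostname, to_string($message.source)));
end
```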
Sorry for the delay.
I just tried putting the rule in stage -1 but it does exactly the same thing.
So I’m not sure what’s wrong here. I only have this pipeline rule active, nothing else.
Hi rockJ, and thanks for your help.
I tried the route_to_stream option but it gives the same result.
Now I’m wondering about the fact that, in the Cisco stream, I ticked the option “Remove matches from ‘All messages’ stream”:
Could that explain my issue?
Regarding your second option, unfortunately, as I said, I only have access to the web interface, not the Linux CLI. I would need to ask the datacenter helpdesk every time I have a new switch to add to the CSV file, which is hardly an option for me.
How do you have your Message Processors Configuration set up? Below is mine. Occasionally I have seen rules/pipelines NOT working as expected when the Message Filter Chain comes AFTER the Pipeline Processor.
Good point! This is what I have (I never touched this, actually):
So if I get it right, the Pipeline Processor should come after the Message Filter Chain?
I believe you want Message Filter Chain first. It trips people up all the time. Test it out.
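For reference, the ordering that usually works is sketched below. The reasoning (my understanding, not stated in the thread) is that the Message Filter Chain is what routes messages into streams, so if the Pipeline Processor runs first, stream-connected pipelines see messages before they have been routed into the Cisco stream — which matches the symptom of the rule only firing when connected to “All messages”.

```
System > Configurations > Message Processors (desired order):
1. Message Filter Chain   - routes messages into streams
2. Pipeline Processor     - runs pipelines connected to those streams
```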
I’m a beginner with Graylog. I didn’t notice this setting before, to be honest.
I’m not sure why it is like this by default; maybe I’m not configuring my syslog input as I should.
At least it’s working fine now.
many thanks for your help and support
have a nice day
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.