Apply Extractor Rules based on data input

Hello,

I have the below setup:

I have two CSV files, file1.csv and file2.csv, which I am shipping with the collector/Filebeat to the Graylog server.
The Graylog server is running a single Beats input on port 5044.
I have created a configuration for each CSV file with the proper tags, and that part is working fine.
The configurations are also set to add a field “SOURCE” with the values S1 and S2 respectively, so that I can identify which CSV file each message came from.

file1.csv has 2 columns Col1A, Col1B
file2.csv has 2 columns Col2A, Col2B
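For reference, the Filebeat side of the setup looks roughly like the sketch below (the paths and hostname are placeholders, not my actual values; `fields_under_root` promotes SOURCE to a top-level field):

```yaml
filebeat.inputs:
  - type: log
    paths: ["/data/file1.csv"]   # placeholder path
    fields:
      SOURCE: S1
    fields_under_root: true
  - type: log
    paths: ["/data/file2.csv"]   # placeholder path
    fields:
      SOURCE: S2
    fields_under_root: true

output.logstash:
  hosts: ["graylog.example.com:5044"]   # the single Beats input
```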

Now, in the inputs section, I added an extractor to convert the CSV lines into fields.
Using the Copy Input extractor I specified that the column names are Col1A and Col1B.
But this rule gets applied to all data the input receives, which I don’t want.

I want to be able to add another Copy Input extractor that specifies the fields Col2A and Col2B for the second file.

I am unable to achieve this. What is the best way to do it?

Note: I would prefer to keep a single Beats input on the Graylog server.

Hej @sachin

switch from extractors to processing pipelines. That will enable you to define the column names individually for each input file.
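A minimal sketch of one such pipeline rule, assuming the SOURCE field set by Filebeat is present and the raw CSV line is in the `message` field (the rule name and field names just mirror your example):

```
rule "parse file1.csv columns"
when
  has_field("SOURCE") && to_string($message.SOURCE) == "S1"
then
  // split the raw line on commas and map the parts to named fields
  let cols = split(",", to_string($message.message));
  set_field("Col1A", cols[0]);
  set_field("Col1B", cols[1]);
end
```

A second rule with the condition `== "S2"` and the fields Col2A/Col2B handles the other file. Connect the pipeline to the stream that receives your Beats input, and each rule only fires for its own file.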

regards
Jan

Thanks @jan
I explored pipelines and they are wonderful.
But I think extractors are also very helpful.
The only feature they are missing is the ability to apply a content-based filter rule that decides whether an extractor runs.

Are there any plans to add that to extractors in the future?

Hej Sachin,

the condition rule is available - but only on the same field. You cannot refer to another field.

And I don’t think that will be implemented. But you might want to raise a feature request for it.

regards
Jan