Gl2_processing_error after Upgrade to 4.3.6 / OpenSearch

After migrating from Elasticsearch to OpenSearch, all my messages have a new field “gl2_processing_error” added.

I don’t know why this happens.

Here are some examples:

Could not apply extractor <sad (ec4de4c0-2ecb-11ed-9f59-0242319c6431)> - Unexpected character ('.' (code 46)): Expected space separating root-level values ...

Could not apply extractor <sad (ec4de4c0-2ecb-11ed-9f59-0242319c6431)> - Unrecognized token 'Inbound': was expecting 'null', 'true', 'false' or NaN
 at [Source: (String)"Inbound Message ...
Could not apply extractor <sad (ec4de4c0-2ecb-11ed-9f59-0242319c6431)> - Unrecognized token 'Caused': was expecting ('true', 'false' or 'null')
 at [Source: (String)"Caused by: javax.ws.rs.core.NoContentException: Missing entity. ...

What is the reason for this?

Hello

This normally means Elasticsearch/OpenSearch is having trouble indexing those logs, so Graylog appends a gl2_processing_error field to them to help troubleshoot the issue. Judging from your logs, an extractor is acting up and producing output the backend doesn’t like. Have you tried manually rotating the indices? That sometimes helps when upgrade issues occur with ES.
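Each error message embeds the title and id of the failing extractor in the form `Could not apply extractor <title (id)> - …`. As a minimal sketch (a hypothetical helper, not part of Graylog), you can pull those out of the gl2_processing_error value with a small regex:

```python
import re

# Matches the "Could not apply extractor <title (id)>" prefix shown
# in the error messages above. The pattern is an assumption based on
# those examples, not an official Graylog format guarantee.
ERROR_RE = re.compile(
    r"Could not apply extractor <(?P<title>[^(]+?) \((?P<id>[0-9a-f-]+)\)>"
)

def failing_extractor(error):
    """Return (title, id) of the extractor named in the error, or None."""
    m = ERROR_RE.search(error)
    return (m.group("title"), m.group("id")) if m else None
```

With the id in hand you know exactly which extractor document to look for in MongoDB.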

You can dig into MongoDB to find out which one.

mongo > db.inputs.find().pretty()
Example:

{
       "_id" : ObjectId("619318e9d1f2fd03dc7b4b4c"),
       "creator_user_id" : "greg.smith",
       "extractors" : [
               {
                       "creator_user_id" : "greg.smith",
                       "source_field" : "message",
                       "condition_type" : "none",
                       "title" : "linux_useradded",
                       "type" : "regex",
                       "cursor_strategy" : "copy",
                       "target_field" : "useradded",
                       "extractor_config" : {
                               "regex_value" : "name=\\s*(\\S+),"
                       },
                       "condition_value" : "",
                       "converters" : [ ],
                       "id" : "cd7e6750-07e2-11ed-b1c0-00155d601d11",
                       "order" : NumberLong(0)
               }
       ],
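Once you have the extractor id from the error message, you can match it against those documents. A minimal sketch, assuming you feed it dicts shaped like the `db.inputs.find()` output above (with pymongo you could pass the cursor from `db.inputs.find()` directly, since it yields dicts):

```python
# Walk input documents and locate the one owning a given extractor id
# (the id taken from the gl2_processing_error message).
def find_extractor(inputs, extractor_id):
    """Return (input_doc, extractor) for the matching id, or None."""
    for doc in inputs:
        for extractor in doc.get("extractors", []):
            if extractor.get("id") == extractor_id:
                return doc, extractor
    return None
```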

So I did a test for you on a bad extractor:

gl2_processing_error:

Could not apply extractor <Graylog_traffic (11772500-4465-11ed-bc61-00155d601d11)> - Unexpected character ('-' (code 45)): Expected space separating root-level values
at [Source: (String)"2022-10-05 17:50:05,100 DEBUG: org.graylog2.rest.accesslog - 10.111.111.111 5e224e7683d72eff75055199 [-] "GET api/system/jvm" Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36 200 -1"; line: 1, column: 6].

level:

Could not apply extractor <Graylog_traffic (11772500-4465-11ed-bc61-00155d601d11)> - Unrecognized token 'at': was expecting ('true', 'false' or 'null') at [Source: (String)"at com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:727) ~[graylog.jar:?]"; line: 1, column: 3].

What is happening in my setup is that this extractor tries to execute on every log that has a message field. A quick adjustment is to add a condition so it only runs on messages containing specific strings.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.