Duplicate message in results of query on specific stream

1. Describe your incident:

I currently have two streams which each use a separate index. Let's call these streams A and B:

  • A is a stream without any rules, so every message received is stored in its index
  • B is a stream with a rule on the level field: only messages with a level smaller than 6 are accepted by this stream. Hence the messages in B's index form a subset of those in A's.

Now I perform a search with stream B explicitly selected. I would expect that only the messages stored in B's index show up in the results. However, for some messages a result is returned from both A's index and B's index, resulting in a duplicate message in the message table.

As a concrete example, consider the images below. Sensitive information is redacted and color-coded where necessary for illustration.

In this example, A is the top stream. Its rule requires a certain field to be present, to ensure the message originates from a correctly configured application server. B is the Anomalies stream at the bottom.


In the image above I would expect only results from the Anomalies stream, as shown in the top right. Although the red result is shown only once in the message table, both the blue and green messages are shown twice.

For both the green and blue messages, one of the results is stored in the Messages index set and the other in the Anomalies index set.

2. Describe your environment:

  • OS Information: Ubuntu 20.04 LTS

  • Package Version: Graylog v4.1.9+bb3e2e8

3. How can the community help?

I was wondering whether this phenomenon is known to others, and whether there is a way to solve it. Moreover, if it is due to an invalid configuration, I would be happy to know so we can improve our Graylog setup. Feel free to request any other information needed to help me solve this issue. Thanks in advance!

That does seem very strange. Maybe try placing quotes around "failed with a time out"; without quotes, each word is a separate search term, and the spaces between them may be handled as an OR (or maybe not). Also, when you click on and open a message, it will tell you all the streams it is stored in (shown below). I would be surprised if it were in more than one in your case, but good to know anyway.

[screenshot: message detail view listing the streams the message is stored in]
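
For illustration, the difference in Graylog's Lucene-based search syntax, assuming the default OR operator applies:

"failed with a time out"    (a single exact phrase)
failed with a time out      (five separate terms, likely combined with OR)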

Sadly, changing the query does not seem to influence the presence of the duplicates.

Looking at the screenshot above, both copies of the message seem to be routed into both streams. However, one is stored in the Anomalies index, while the other is stored in the All index.

I would assume that only messages in the Anomalies index are shown when filtering on that stream, which seems to be confirmed by the fact that the duplicates are not present for all results, only for some.

Edit: note that even for the messages that are shown only once, the "Routed into streams" field lists both streams.

Further investigation in Elasticsearch has not provided me with an explanation either: comparing an indexed message which is not duplicated in the search results to one which is duplicated does not show any relevant differences.

Comparing a message's copies in the All index and the Anomalies index shows that, for both duplicated and non-duplicated messages, all fields remain the same except for the _seq_no property; this is expected, as it differs per index.
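
Such a comparison can be done with Elasticsearch's standard URI search API; a sketch, where the host, the index names, and <message-id> are placeholders:

curl -s 'http://localhost:9200/graylog_all/_search?q=gl2_message_id:<message-id>&pretty'
curl -s 'http://localhost:9200/graylog_anomalies/_search?q=gl2_message_id:<message-id>&pretty'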

Note that when comparing a non-duplicated message with a duplicated message, the following fields are different:

  • _id
  • _seq_no
  • gl2_message_id
  • message
  • full_message
  • timestamp
  • uuid: a custom field which contains a generated UUID.

I would honestly not suspect that any of these properties is responsible for the observed behavior.

Also note that the time window used in the query does not influence the outcome: the results duplicated in one query remain duplicated in another, and the non-duplicated results remain non-duplicated.

If you are specifically searching against one stream, that should not land you in two different indices… In your first post you are restricting to the ___anomalies stream; is the same restriction applied to the search that resulted in the post with a message in both the ____all_146 index and the ____anomalies_145 index? That doesn't seem right. Also, are you using a pipeline rule to shunt the message? Maybe you can post the relevant rules that are applied (use the </> formatting tool). Lastly, what version of Elastic are you on?

Hello,
When you created the stream, did you tick the checkbox?

is the same restriction applied to the search that resulted in the post with a message in both the ____all_146 index and the ____anomalies_145 index?

The same restriction is applied; I am filtering on the __anomalies Stream as in the screenshot of the original post.

Also - are you using a pipeline rule to shunt the message? Maybe you can post relevant rules that are applied?

I am not sure what you mean by "shunt"; would you care to explain? These are the rules applied to the streams:

  • For the All stream: application_server must be present
  • For the Anomalies stream: level must be smaller than 6

what version of elastic are you on?

I use Elasticsearch 6.8.10, via the Docker image docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.10.

@gsmith For the All stream, this checkbox is checked; for the Anomalies stream it is not.

This post on GitHub looks like a similar issue; it is older, yet perhaps still unresolved. It looks like having two streams set up against one input is perhaps not the expected methodology for the kind of separation you require. If I were in this situation, I would use a pipeline rule applied to the catch-all stream that shunts the message over to the anomalies stream/index if the level is lower than 6. You had asked what I meant by shunting: in the pipeline attached to the catch-all stream, you would have a rule similar to this:

rule "low level shunt to anomalies index"
when
    to_long($message.level) < 6             
then
    route_to_stream("odd_anomalies");
    remove_from_stream(); //may not be needed, removes message from catch all stream    
end

I haven’t tested this - just coming off the top of my head.


Thank you for your help! The GitHub issue indeed seems to describe the same observations, and after verifying this in my use case, the duplicates do indeed occur only for indices which are not the most recent one, i.e. those beyond the index rotation window.

I do have some questions about the shunting example you posted, aside from whether the code actually works:

  • How would I make sure that only messages from the shunting pipeline are processed within the stream, and that no messages arrive from other origins (e.g. from the process that is currently in place)?
  • Are you able to confirm that the described problem does not occur when the shunting is applied?
  • To rephrase your shunting example in my setting: I would make the All stream the shunting stream, which receives all messages. I then add a pipeline to this stream with a stage for each additional stream, which shunts messages based on additional conditions; e.g. for the Anomalies stream I would create a stage which forwards all messages with a level lower than 6 to the Anomalies stream. Does this way of phrasing it sound correct to you?

I am not sure that using a pipeline is guaranteed to solve the duplication problem. But both the GitHub incident and yours are based on sorting messages with stream rules rather than pipelines, so it isn't a large issue affecting many people, and since I use pipelines to move (shunt) messages to different streams/indices without a duplication issue (that I know of), I think it's a good bet. It also focuses each tool on what it was made for.

For the pipeline, you can have multiple rules in a stage, and each stage runs all its rules essentially in parallel, meaning your only control over rule order is placing rules in successive stages. So if you have several rules that simply sort incoming messages, you can maintain them in a single stage. If you want something to happen after a rule finds a message and shifts it to a new stream, you can put the follow-up rules in the next stage of the initial stream's pipeline. It's important to note that if you shunt a message to another stream, it will finish all rules and stages in the current pipeline before starting on the new stream (and on a new pipeline, if one is attached to the new stream). You can mitigate that by tightening the conditions of any following rules/stages in the initial stream, or by setting a flag field that later rules check so they can ignore the message; see the sketch below.
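
A minimal, untested sketch of that flag-field idea; the stream name, field names, and rule names here are all hypothetical:

rule "stage 0: shunt anomalies"
when
    to_long($message.level) < 6
then
    route_to_stream(name: "Anomalies");
    // hypothetical flag so rules in later stages can skip this message
    set_field("shunted_to_anomalies", true);
end

rule "stage 1: follow-up for messages that stayed"
when
    not has_field("shunted_to_anomalies")
then
    // follow-up processing that should not run on shunted messages
    set_field("processed_in_all", true);
end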

So in the case of this issue, you would attach one pipeline to your all_message stream, with one stage that contains a series of rules sorting each message into its relevant new stream/index.
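
Something along these lines, again untested; the second rule with its audit_event field and Audit stream is a made-up placeholder, just to show sibling sorting rules living in one stage:

rule "sort: anomalies"
when
    to_long($message.level) < 6
then
    route_to_stream(name: "Anomalies");
end

rule "sort: audit events"
when
    has_field("audit_event")
then
    route_to_stream(name: "Audit");
end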

