I see a similar issue reported here, but the thread was locked so I couldn’t pile on.
New install inside a Docker container, with a single source (a DNS server) pushing logs in so far. I created multiple extractors for queries and responses, then realized, as I was creating a third, that I could merge them all into a single extractor. So I deleted the two existing extractors and reworked the third to cover all message types.
But incoming messages are still being parsed by the now-deleted extractors, judging by the field names being created. And some of the messages aren’t being parsed at all, yet when I paste one of those specific messages into the tester on the extractor edit page, it parses correctly. The new extractor is firing, according to the stats on its Details page.
It feels like some process needs to be restarted to flush out the old configuration and load in the new. Is there a way to force that? Or is there any other reason the old extractors would still be parsing messages hours after they were deleted?
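In case it helps anyone debugging something similar, one way to check what the server actually has stored (as opposed to what the UI shows) is to list the extractors for the input via the REST API. This is just a sketch of the call as I understand it; the credentials, host/port, and input ID below are placeholders for your own setup, and the exact endpoint path may differ by Graylog version:

```shell
# List the extractors the server currently has stored for a given input.
# Replace admin:password, localhost:9000, and <input-id> with your values.
curl -s -u admin:password \
  "http://localhost:9000/api/system/inputs/<input-id>/extractors"
```

If the deleted extractors still show up in that response, the UI and the backing store are out of sync; if they don’t, something downstream is still applying a cached pipeline.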
I stopped the Input and waited a couple of minutes. I verified that no new messages were coming in by sitting in a search set to “Search in the last 5 minutes” and refreshing with “Update every 1 second”. I started the Input back up and the feed started moving again.
I then deleted the remaining extractor, so the Input now has absolutely no extractors. New messages are coming in and are still being parsed into custom fields. Based on the field names, the original two extractors are firing; the third one that replaced them is not. All three of those extractors are deleted, so none of them should be firing, yet new messages are still being parsed into custom fields.
I stopped the whole container and brought it back up, so everything restarted fresh with no extractors configured. I watched the events rolling in: they were still getting parsed into custom fields by ghost extractors that were no longer present in the UI.
I completely deleted the container and its volume and rebuilt the server from scratch. I have rebuilt the extractors and they are processing correctly. I haven’t tried deleting any of them to see if the issue recurs, because I actually wanted a functioning server sometime today.
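For reference, the nuke-and-rebuild step was roughly the following. This assumes a Docker Compose setup with named volumes declared in the compose file; adjust if you created the container and volume by hand:

```shell
# Tear down the container AND its named volumes (this destroys all data),
# then recreate everything fresh from the compose file.
docker compose down --volumes
docker compose up -d
```

Destroying the volume is what finally cleared the ghost extractors for me, which suggests the stale state was living in the persisted data rather than in the running process.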