MongoDB connections constantly resetting

I hate to push my luck, but this board was so helpful the other day that I think I'll try again with another question/concern. Thank you in advance for any information on this.

My issue is pretty simple. Things work fine, but my MongoDB logs are constantly flooded with connection resets; it looks like Graylog is constantly reconnecting. The resets happen almost exactly every 10 seconds.

I’m running both MongoDB and Graylog in Kubernetes:

Kubernetes: k3s v1.31.2
MongoDB: 6.0.8 (operator installed via Helm, then a CRD created for the cluster)
Graylog: 6.1.3 (StatefulSet managed via manifest file and kubectl)

The connection string I’m currently using is:

mongodb_uri = mongodb://graylog:****@mongodb-0.mongodb-svc.mongodb.svc.cluster.local:27017,mongodb-1.mongodb-svc.mongodb.svc.cluster.local:27017,mongodb-2.mongodb-svc.mongodb.svc.cluster.local:27017/graylog?replicaSet=mongodb&ssl=false
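
As a sanity check on the URI itself, I can run something like this rough Go sketch from a pod inside the cluster (it uses the same official mongo-go-driver that appears in the logs below; the URI is the same shape as above with the password redacted) to confirm the replica set is reachable and a primary is elected:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
	"go.mongodb.org/mongo-driver/mongo/readpref"
)

func main() {
	// Same URI shape as the Graylog mongodb_uri above; password redacted.
	uri := "mongodb://graylog:<password>@mongodb-0.mongodb-svc.mongodb.svc.cluster.local:27017," +
		"mongodb-1.mongodb-svc.mongodb.svc.cluster.local:27017," +
		"mongodb-2.mongodb-svc.mongodb.svc.cluster.local:27017/graylog?replicaSet=mongodb&ssl=false"

	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Disconnect(context.Background())

	// Ping the primary to confirm server selection works and a primary is up.
	if err := client.Ping(ctx, readpref.Primary()); err != nil {
		log.Fatalf("ping primary: %v", err)
	}
	fmt.Println("connected to replica set, primary reachable")
}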

The logs I see on all three MongoDB nodes:

{"t":{"$date":"2024-11-25T09:20:06.405+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn154607","msg":"client metadata","attr":{"remote":"192.168.125.108:39084","client":"conn154607","doc":{"driver":{"name":"mongo-go-driver","version":"v1.7.2+prerelease"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.20.5","application":{"name":"MongoDB Automation Agent v12.0.24.7719 (git: de43347cefcf98c287c7d00c8c6acd2dc85f0370)"}}}}
{"t":{"$date":"2024-11-25T09:20:06.405+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn154608","msg":"client metadata","attr":{"remote":"192.168.125.108:39086","client":"conn154608","doc":{"driver":{"name":"mongo-go-driver","version":"v1.7.2+prerelease"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.20.5","application":{"name":"MongoDB Automation Agent v12.0.24.7719 (git: de43347cefcf98c287c7d00c8c6acd2dc85f0370)"}}}}
{"t":{"$date":"2024-11-25T09:20:06.416+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn154609","msg":"client metadata","attr":{"remote":"192.168.125.108:39088","client":"conn154609","doc":{"driver":{"name":"mongo-go-driver","version":"v1.7.2+prerelease"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.20.5","application":{"name":"MongoDB Automation Agent v12.0.24.7719 (git: de43347cefcf98c287c7d00c8c6acd2dc85f0370)"}}}}
{"t":{"$date":"2024-11-25T09:20:06.437+00:00"},"s":"I",  "c":"-",        "id":20883,   "ctx":"conn154607","msg":"Interrupted operation as its client disconnected","attr":{"opId":42854753}}
{"t":{"$date":"2024-11-25T09:20:16.603+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn154610","msg":"client metadata","attr":{"remote":"192.168.125.108:50270","client":"conn154610","doc":{"driver":{"name":"mongo-go-driver","version":"v1.7.2+prerelease"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.20.5","application":{"name":"MongoDB Automation Agent v12.0.24.7719 (git: de43347cefcf98c287c7d00c8c6acd2dc85f0370)"}}}}
{"t":{"$date":"2024-11-25T09:20:16.604+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn154611","msg":"client metadata","attr":{"remote":"192.168.125.108:50280","client":"conn154611","doc":{"driver":{"name":"mongo-go-driver","version":"v1.7.2+prerelease"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.20.5","application":{"name":"MongoDB Automation Agent v12.0.24.7719 (git: de43347cefcf98c287c7d00c8c6acd2dc85f0370)"}}}}
{"t":{"$date":"2024-11-25T09:20:16.605+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn154612","msg":"client metadata","attr":{"remote":"192.168.125.108:50294","client":"conn154612","doc":{"driver":{"name":"mongo-go-driver","version":"v1.7.2+prerelease"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.20.5","application":{"name":"MongoDB Automation Agent v12.0.24.7719 (git: de43347cefcf98c287c7d00c8c6acd2dc85f0370)"}}}}
{"t":{"$date":"2024-11-25T09:20:16.626+00:00"},"s":"I",  "c":"-",        "id":20883,   "ctx":"conn154611","msg":"Interrupted operation as its client disconnected","attr":{"opId":42855742}}
{"t":{"$date":"2024-11-25T09:20:26.777+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn154613","msg":"client metadata","attr":{"remote":"192.168.125.108:51584","client":"conn154613","doc":{"driver":{"name":"mongo-go-driver","version":"v1.7.2+prerelease"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.20.5","application":{"name":"MongoDB Automation Agent v12.0.24.7719 (git: de43347cefcf98c287c7d00c8c6acd2dc85f0370)"}}}}
{"t":{"$date":"2024-11-25T09:20:26.778+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn154614","msg":"client metadata","attr":{"remote":"192.168.125.108:51574","client":"conn154614","doc":{"driver":{"name":"mongo-go-driver","version":"v1.7.2+prerelease"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.20.5","application":{"name":"MongoDB Automation Agent v12.0.24.7719 (git: de43347cefcf98c287c7d00c8c6acd2dc85f0370)"}}}}
{"t":{"$date":"2024-11-25T09:20:26.779+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn154615","msg":"client metadata","attr":{"remote":"192.168.125.108:51596","client":"conn154615","doc":{"driver":{"name":"mongo-go-driver","version":"v1.7.2+prerelease"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.20.5","application":{"name":"MongoDB Automation Agent v12.0.24.7719 (git: de43347cefcf98c287c7d00c8c6acd2dc85f0370)"}}}}
{"t":{"$date":"2024-11-25T09:20:26.820+00:00"},"s":"I",  "c":"-",        "id":20883,   "ctx":"conn154613","msg":"Interrupted operation as its client disconnected","attr":{"opId":42856744}}

Any thoughts on what’s happening here?

Hello @danielgoepp,

Hard to say. Is there any resource contention on the hosts where these containers are running?

Thanks for the reply. No resource issues that I can see; there's tons of headroom on all fronts. It's odd: I have a really simple setup, and it has done this from day one. I think I will scale MongoDB down to a single node and see if that helps. Perhaps it is the Service routing in the cluster that is causing the connection to jump around between the nodes.
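
To check that theory, something like this rough Go sketch might help (again using the same driver the agent in the logs uses, with the placeholder URI from above): it logs every connection-pool event together with the replica set member address it concerns, which should show whether connections really churn every ~10 seconds and whether they hop between nodes.

package main

import (
	"context"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/event"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	// Same URI shape as the Graylog mongodb_uri above; password redacted.
	uri := "mongodb://graylog:<password>@mongodb-0.mongodb-svc.mongodb.svc.cluster.local:27017," +
		"mongodb-1.mongodb-svc.mongodb.svc.cluster.local:27017," +
		"mongodb-2.mongodb-svc.mongodb.svc.cluster.local:27017/graylog?replicaSet=mongodb&ssl=false"

	// Log every connection-pool event with the member address it concerns,
	// e.g. "ConnectionCreated" / "ConnectionClosed" against mongodb-0/1/2.
	poolMonitor := &event.PoolMonitor{
		Event: func(e *event.PoolEvent) {
			log.Printf("pool event: %-28s address=%s", e.Type, e.Address)
		},
	}

	client, err := mongo.Connect(context.Background(),
		options.Client().ApplyURI(uri).SetPoolMonitor(poolMonitor))
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Disconnect(context.Background())

	// Issue a trivial command every 30 seconds so the pool stays in use
	// while we watch the events; run for ten minutes and then exit.
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		if err := client.Database("graylog").RunCommand(context.Background(), bson.D{{Key: "ping", Value: 1}}).Err(); err != nil {
			log.Printf("ping command: %v", err)
		}
		time.Sleep(30 * time.Second)
	}
}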