To be honest: better to migrate to OpenSearch now rather than later. If your instance is still growing, you will lose less data.
How many GB of logs does your Squid ingest per day? In my experience, it can be worth throwing a little hard drive at the problem and keeping the data in the live database, OpenSearch or Elasticsearch. It will save you from workarounds.
A little goodie for Squid users:
You can define a custom logformat, which can then be referenced from your logging config.
```
logformat graylog_vhost { "server_fqdn": "%{Host}>h", "short_message": "%rm %"ru HTTP/%rv", "timestamp": %ts, "client_source_ip": "%>a", "squid_ip": "%la", "server_ip": "%<a", "response_time": %tr, "size_of_request": %>st, "size_of_reply": %<st, "request_url": "%"ru", "http_status_code": %>Hs, "request_method": "%rm", "squid_request_status": "%Ss", "squid_hierarchy_status": "%Sh", "mime_type": "%mt", "x_forwarded_for": "%{X-Forwarded-For}>h", "referer": "%{Referer}>h", "user_agent": "%"{User-Agent}>h"}
```
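To actually use the format, reference it by name in an `access_log` directive. A minimal sketch (the hostname and port are placeholders for your Graylog input, not values from this thread):

```
# Hypothetical example: ship access logs in the graylog_vhost format
# over UDP to a Graylog raw/plaintext input.
access_log udp://graylog.example.com:5555 logformat=graylog_vhost
```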
This produces nicely formatted JSON that can easily be ingested into Graylog.
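To illustrate what such a line looks like on the wire, here is a quick Python check with a made-up sample record (all field values below are invented for the example; only the field names come from the logformat above):

```python
import json

# Hypothetical log line as Squid would emit it with the graylog_vhost
# logformat; every value here is a placeholder, not real traffic.
sample = (
    '{ "server_fqdn": "example.com", '
    '"short_message": "GET http://example.com/index.html HTTP/1.1", '
    '"timestamp": 1700000000, '
    '"client_source_ip": "10.0.0.5", '
    '"squid_ip": "192.0.2.1", '
    '"server_ip": "203.0.113.7", '
    '"response_time": 42, '
    '"size_of_request": 512, '
    '"size_of_reply": 10240, '
    '"request_url": "http://example.com/index.html", '
    '"http_status_code": 200, '
    '"request_method": "GET", '
    '"squid_request_status": "TCP_MISS", '
    '"squid_hierarchy_status": "HIER_DIRECT", '
    '"mime_type": "text/html", '
    '"x_forwarded_for": "-", '
    '"referer": "-", '
    '"user_agent": "curl/8.0"}'
)

# If the logformat is correct, each line parses as a flat JSON object.
record = json.loads(sample)
print(record["request_method"], record["http_status_code"])
```

A JSON extractor (or a raw input with a JSON pipeline rule) on the Graylog side then maps these keys straight to message fields.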