Slow initial API search

I'm testing Graylog 5 with OpenSearch 2.x on Ubuntu Server 22.04.
When I query the API (using a token), the first request takes 1000-4000 ms. If I repeat the request I get a result in 45 ms (the same for subsequent requests). If I wait a while (a few minutes), the same issue reappears.
The same request against Graylog 4 with Elasticsearch has no issues (I'm duplicating the data to it with an output).
The security plugin is disabled in the Graylog 5 configuration.

API request:
http://192.168.2.64:9000/api/search/universal/relative?query=source%3AETL03%20AND%20Application%3A"Capteur%20WebDownload"&range=300&decorate=true

Requests to the cluster's health endpoint (curl -XGET --noproxy '*' '192.168.2.65:9200/_cluster/health?pretty') do not exhibit the same problem.

Any idea where the bottleneck is and how I could fix it?
Thanks
Peter

Does the Elasticsearch cluster have more heap than the OpenSearch one? It sounds a bit like a caching issue.

Thanks for the quick reply.
I will have to check, but I think it's the other way around: I have more resources for the OpenSearch cluster. I'm short of time now, but I will check tomorrow morning. I'm not sure why it would be a caching issue when it's only the first request that is slow.
Peter
Peter

also check:

  • “unused” RAM on the machines: the OS uses it for caching, which is also important
  • the size of the index sets the messages are part of
  • the number of shards for those index sets
  • whether the storage type (SSD/HDD) is the same
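If it helps, most of those can be checked from the command line. A sketch, assuming the node address and index naming from this thread (swap in your own host):

```shell
# Sketch only: the _cat APIs exist in both Elasticsearch and OpenSearch.
# Host and index pattern are the examples from this thread.
HOST='192.168.2.65:9200'

# Size and primary-shard count of each index:
curl -s --noproxy '*' --connect-timeout 2 "http://$HOST/_cat/indices/graylog_*?v&h=index,pri,store.size"

# Max heap per node:
curl -s --noproxy '*' --connect-timeout 2 "http://$HOST/_cat/nodes?v&h=name,heap.max"

# "Unused" RAM: the buff/cache column is the OS page cache that Lucene reads benefit from.
free -m
```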

Hi, I think we can rule out Graylog as the problem. (I found a solution, see below. I was documenting my research and thought it might help someone.)

I have been running a series of tests against the two clusters (Elasticsearch and OpenSearch), and it seems that for the OpenSearch cluster the initial request is very slow.
If I repeat the same query within a few seconds, I then get a quick reply.
E.g. selecting 1000 docs:
initial took: 1178
5 seconds later, took: 24
5 seconds later, took: 19

The ES cluster is consistent at around 55.
This is the command I used to test the system (Ubuntu 22.04):
sleep 60; for i in {1..5}; do curl -XGET --noproxy '*' '192.168.2.38:9200/graylog_842/_search?size=1000&pretty' | grep took; date; sleep 5; done
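As a side note, grepping `took` only shows the server-side time; curl's `--write-out` timers include connection overhead as well. A hedged variant of the same loop (same example host/index as above, not verified against this particular setup):

```shell
# Client-side timing via curl's --write-out variables (time_total, time_connect).
# --connect-timeout keeps the loop from hanging if a node is down.
for i in 1 2 3 4 5; do
  curl -s -o /dev/null --noproxy '*' --connect-timeout 2 \
       -w 'total: %{time_total}s (connect: %{time_connect}s)\n' \
       'http://192.168.2.38:9200/graylog_842/_search?size=1000' || true
  sleep 5
done
```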

The ES index is 1.7 GB and the OS index is 256 MB (much smaller), so index size is not the issue; each has 4 shards.
heap:
OpenSearch:
curl -XGET --noproxy '*' '192.168.2.65:9200/_cat/nodes?h=heap.max'
4gb
4gb
4gb
ES 6.8:
curl -XGET --noproxy '*' '192.168.2.38:9200/_cat/nodes?h=heap.max'
1007.3mb
1007.3mb
1007.3mb

free memory on the OS cluster servers:
free -m
               total        used        free      shared  buff/cache   available
Mem:            7884        5036         118           3        2729        2388
Swap:           3947         106        3841

free memory on the ES cluster servers:
free -m
               total        used        free      shared  buff/cache   available
Mem:            7927        2100         211           0        5615        5367
Swap:           4095          41        4054

I noted the difference in available RAM on the OS cluster.
They are VMs, so I added a couple of GB of RAM to each OpenSearch cluster node and, bingo, the initial queries are now at about 20.
So the balance between heap and available RAM is significant.
I thought I would share the solution to help any other admins struggling with the complexities of ES/OpenSearch.
Thanks for your help, which pointed me in the right direction.
(I probably could have reduced the heap to resolve this, but adding RAM was simpler in this instance.)
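For anyone hitting the same thing: the common rule of thumb is to keep the JVM heap at or below roughly half of the machine's RAM, so the rest stays available to the OS page cache that Lucene depends on for fast first reads. A minimal sketch of that check, using the numbers from this thread (4 GB heap on a node with 7884 MB total RAM before the upgrade):

```shell
# Rule-of-thumb check (assumption: heap <= ~50% of RAM leaves enough page cache).
ram_mb=7884   # total RAM from `free -m` on an OpenSearch node (before adding RAM)
heap_mb=4096  # configured heap (4gb from _cat/nodes?h=heap.max)

if [ "$heap_mb" -gt $((ram_mb / 2)) ]; then
  echo "heap is over 50% of RAM; the page cache may be starved"
else
  echo "heap/RAM balance looks OK"
fi
# prints: heap is over 50% of RAM; the page cache may be starved
```

With the extra 2 GB per node, the same check passes, which matches the fix; lowering -Xmx/-Xms in jvm.options would have attacked the same ratio from the other side.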
Peter


Hi Peter,
I'm happy that you solved it! Please go ahead and mark your post as the solution, so that others can benefit from it in the future. :)

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.