Howdy Graylog People,
Chris Black here, Senior Solutions Engineer at Graylog. I’m very pleased to announce the first installment in a series of content that will be released under the “Graylog Labs” moniker. The content will be varied: videos like the one I’m posting today, blog posts, and actual Graylog content packs with dashboards, pipeline rules, saved searches, etc.
This content represents an effort by the Graylog staff to share with the larger community some of the tribal knowledge we have acquired inside Graylog. It will also enhance the other sources of information we offer, such as the official marketing and documentation, by filling in the gaps and providing examples of concepts that may be unclear. If there is a topic you would like covered, please feel free to ask away.
Full disclosure: this content will be released like any other community content. It comes with no offer of support and is delivered on an as-is basis. It is NOT Graylog-supported content. We will endeavor to update anything that needs it, but on a “best effort” basis by the individual contributor.
With all that said, I give you,
I wish we had an Announcements category so this won’t get lost in time. @dscryber
We are going to be posting a lot more Graylog Labs content in the future. Would a “Graylog Labs” category be helpful for these kinds of things @gsmith @dscryber?
Actually, that sounds great. Thank you.
Hi @chris.black,
thank you very much for that video. It helps a lot!
How far is the accounted log size in Graylog (which is also relevant for the license) from the size actually ingested into Graylog? Or can I treat them as 1:1 for orientation?
This is what I wrote down in my notes:
How big does the MongoDB server need to be if I have a spare one with 50-100 GB?
Hey Joe, yes. I’ll set up a category called Graylog Labs.
@ihe, are you referring to how much data will be stored in OpenSearch? The ingestion is measured just before insertion into OpenSearch, but the actual bytes stored vary, based on the type of data you are collecting and how it is organized in your indices.
For calculating storage requirements, take the number of days you wish to retain the data, multiply by the amount of data you are ingesting, and then multiply that by 1.3. This allows for headroom in OpenSearch.
Ingest Rate (GB) * Days Retained * 1.3
A word of warning: you need to allow for LOTS of slack space in OS/ES. If you haven’t read the thread about watermarks on the board, go find it, or just Google it for a good explanation from Elastic.
For purposes of calculation, we recommend you always leave at least 25% of disk space free. The watermarks kick in at 85% by default, so that gives you a little space if storage gets close to the limit. The way I like to think about it: 75% used is 100% full.
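To make the arithmetic concrete, here is a minimal sketch of the sizing rules above. The function names and example numbers (50 GB/day, 30 days) are my own illustration, not anything official from Graylog:

```python
# Rough OpenSearch disk sizing, per the rule of thumb above.
# Illustrative only: names and example numbers are not Graylog-official.

def required_storage_gb(ingest_gb_per_day: float, days_retained: int,
                        headroom: float = 1.3) -> float:
    """Ingest Rate (GB) * Days Retained * 1.3 (headroom for OpenSearch)."""
    return ingest_gb_per_day * days_retained * headroom

def provisioned_disk_gb(required_gb: float, max_fill: float = 0.75) -> float:
    """Treat 75% used as 100% full: keep at least 25% of the disk free."""
    return required_gb / max_fill

storage = required_storage_gb(ingest_gb_per_day=50, days_retained=30)
disk = provisioned_disk_gb(storage)
print(f"Data footprint: {storage:.0f} GB; provision at least {disk:.0f} GB")
```

At 50 GB/day retained for 30 days, the data footprint works out to 1950 GB, which you would host on at least 2600 GB of disk so the 85% watermark never comes into play.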
Thanks Chris for the announcement to go along with the new Graylog Labs category I’ve set up in the community. I’ll post it to the category and in the banner (tomorrow — I want the Graylog v5 news to resonate alone for a while).
We are going to cross-post with a link to the Graylog Central category for a while, just to draw attention to it.
Thank you @chris.black for your answer. My question was not 100% precise, I think:
How much difference is there between the ingested log size and the accounted log size? I guess some technical fields like gl2_accounted_messagesize and streamIDs do not count?
Oh, sorry @ihe. There are several fields that do not count towards ingestion; all the gl2_-prefixed fields are in this category. They do take up storage space, but the amount is negligible and is covered by the 1.3 multiplier I mentioned in the calculation.
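The distinction can be sketched roughly like this. Note this is a hypothetical illustration of the idea (summing field names and values while skipping `gl2_`-prefixed fields), not Graylog’s actual accounting code:

```python
# Illustrative sketch, NOT Graylog's real implementation: license
# accounting skips internal gl2_-prefixed fields, even though they
# still occupy (a negligible amount of) storage in OpenSearch.

def accounted_size_bytes(message: dict) -> int:
    """Sum field name + value lengths, excluding internal gl2_* fields."""
    return sum(
        len(str(key)) + len(str(value))
        for key, value in message.items()
        if not key.startswith("gl2_")
    )

msg = {
    "message": "user login failed",
    "source": "web-01",
    "gl2_accounted_message_size": 42,  # internal field, skipped above
}
print(accounted_size_bytes(msg))
```

So the accounted size tracks only the "payload" fields, which is why the ingested and stored sizes differ slightly but can be treated as roughly 1:1 once the 1.3 headroom multiplier is applied.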
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.