Increase the maximum size of log messages

I tried to reproduce your use case in a local test setup, and it worked.

That being said, I would recommend using a field other than “full_message” for your large messages, so that you have full control over its mapping and can still search the predefined “full_message” field.
You could, for example, rename the respective field in a pipeline rule, as sketched below.
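
A minimal sketch of such a rule, assuming a hypothetical target field name “large_payload” (has_field and rename_field are built-in pipeline functions):

rename_full_message.rule

rule "move large payload out of full_message"
when
  has_field("full_message")
then
  // "large_payload" is a hypothetical name; pick one that fits your schema
  rename_field("full_message", "large_payload");
end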

Please refer to these notes. The custom mapping below sets "index": false on “full_message”, which keeps the field in the document source but excludes it from the index, so it will not be searchable:

mapping.json

{
  "template": "graylog_*",
  "mappings" : {
    "message" : {
      "properties" : {
        "full_message": {
          "type": "text",
          "index": false
        }
      }
    }
  }
}

Add custom index template

$ curl -X PUT -d @'mapping.json' 'http://localhost:9200/_template/graylog-custom-mapping?pretty'
{
  "acknowledged" : true
}
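
To verify that the template was registered, you can fetch it back (same local Elasticsearch as above):

$ curl 'http://localhost:9200/_template/graylog-custom-mapping?pretty'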

Rotate the active write index (System > Indices > Maintenance > Manually cycle deflector in the web interface) and check the mapping of the new index via the deflector alias

$ curl 'http://localhost:9200/graylog_deflector/_mapping?pretty'
{
  "graylog_2" : {
    "mappings" : {
      "message" : {
        "dynamic_templates" : [
          {
            "internal_fields" : {
              "match" : "gl2_*",
              "mapping" : {
                "type" : "keyword"
              }
            }
          },
          {
            "store_generic" : {
              "match" : "*",
              "mapping" : {
                "index" : "not_analyzed"
              }
            }
          }
        ],
        "properties" : {
          "full_message" : {
            "type" : "text",
            "index" : false,
            "analyzer" : "standard"
          },
          "message" : {
            "type" : "text",
            "analyzer" : "standard"
          },
          "source" : {
            "type" : "text",
            "analyzer" : "analyzer_keyword",
            "fielddata" : true
          },
          "streams" : {
            "type" : "keyword"
          },
          "timestamp" : {
            "type" : "date",
            "format" : "yyyy-MM-dd HH:mm:ss.SSS"
          }
        }
      }
    }
  }
}
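
If you are only interested in that one field, Elasticsearch’s get-field-mapping API returns just its entry (again assuming the local test setup):

$ curl 'http://localhost:9200/graylog_deflector/_mapping/message/field/full_message?pretty'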

Sending a big GELF message (~640 KB in the “full_message” field) to the GELF TCP input; the trailing null byte is the GELF TCP frame delimiter

# echo -n -e "{ \"version\": \"1.1\", \"short_message\": \"Test\", \"full_message\": \"$(for i in $(seq 0 64000); do echo -n '0123456789';done)\" }\0" | nc 127.0.0.1 12201
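
A slightly more readable equivalent (a sketch; “test-host” is a placeholder, and the GELF TCP input is assumed to listen on 127.0.0.1:12201):

# build the ~640 KB payload once (printf repeats the format string per argument),
# then frame the GELF message with a trailing null byte
payload=$(printf '0123456789%.0s' $(seq 0 64000))
printf '{ "version": "1.1", "host": "test-host", "short_message": "Test", "full_message": "%s" }\0' "$payload" | nc 127.0.0.1 12201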

Ingested message in Graylog

For reference, the Graylog setup was created with Docker and Docker Compose:

docker-compose.yml

version: '2'
services:
  mongo:
    image: mongo:3
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.7
    environment:
      - http.host=0.0.0.0
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
    ports:
      - 9200:9200
  graylog:
    image: graylog/graylog:2.4.3-1
    environment:
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_WEB_ENDPOINT_URI=http://127.0.0.1:9000/api
      - GRAYLOG_MESSAGE_JOURNAL_ENABLED=false
    links:
      - mongo
      - elasticsearch
    ports:
      - 9000:9000
      - 12201:12201
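
To bring up the stack and follow Graylog’s startup log (the web interface then listens on http://127.0.0.1:9000, login admin / admin as configured above):

$ docker-compose up -d
$ docker-compose logs -f graylog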

Thanks a lot for the answer.

Did you create the new index from the Graylog interface? System > Indices > Maintenance > Manually cycle deflector?

Thanks

Yes, exactly (well, System / Indices / Index Set / Maintenance, since it’s Graylog 2.4.3).

I installed a new Graylog configuration from scratch on a DigitalOcean droplet.
The logs are being saved as multiple messages.

Is there an option or something that controls this?

It doesn’t look like you’ve been using a GELF input… Plain Syslog or Raw inputs are bound by the transport’s message size limits and may split or truncate long messages, whereas GELF handles large payloads (chunking over UDP, null-byte-framed messages over TCP).
