Graylog, log problem

I’m sorry, but Google is your best friend right now. Trying to use things without a basic understanding of IT concepts and troubleshooting is like driving an F1 car without ever having driven before: it’s just not going to work. Frankly, I don’t think anyone here on the forum has the time (or patience, in my case) to hold your hand through a very simple operation.

Yes, I’m an asshole.

If your database is DEAD (red) and Graylog can’t write out data, what do you think the expected behavior is?

We have told you so many times: check your Elasticsearch.
MAKE ITS STATUS GREEN.
After that, look into your other problems.
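
For example, assuming Elasticsearch is listening on its default port 9200 on the same host, you can ask for the cluster status directly:

# Query Elasticsearch cluster health; the "status" field should read "green"
curl -s 'http://localhost:9200/_cluster/health?pretty'

A red status means at least one primary shard is unassigned, so Elasticsearch cannot index new messages and Graylog has nowhere to write them.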

I did not find the heap setting in the Java configuration file under /etc/elasticsearch, so I went into init.d and set it up like this (see also the note on /etc/default/elasticsearch after the script):

PATH=/bin:/usr/bin:/sbin:/usr/sbin
NAME=elasticsearch
DESC="Elasticsearch Server"
DEFAULT=/etc/default/$NAME

if [ `id -u` -ne 0 ]; then
        echo "You need root privileges to run this script"
        exit 1
fi


. /lib/lsb/init-functions

if [ -r /etc/default/rcS ]; then
        . /etc/default/rcS
fi


# The following variables can be overwritten in $DEFAULT

# Run Elasticsearch as this user ID and group ID
ES_USER=elasticsearch
ES_GROUP=elasticsearch

# Directory where the Elasticsearch binary distribution resides
ES_HOME=/usr/share/$NAME

# Heap size defaults to 256m min, 1g max
# Set ES_HEAP_SIZE to 50% of available RAM, but no more than 31g
# (a plain size such as 2g; the startup script expands it to -Xms2g -Xmx2g)
ES_HEAP_SIZE=2g

# Heap new generation
#ES_HEAP_NEWSIZE=

# max direct memory
#ES_DIRECT_SIZE=

# Additional Java OPTS (heap is already set via ES_HEAP_SIZE above; this
# variable must contain only JVM flags, not a command to run)
#ES_JAVA_OPTS=

# Maximum number of open files
MAX_OPEN_FILES=65536

# Maximum amount of locked memory
#MAX_LOCKED_MEMORY=

# Elasticsearch log directory
LOG_DIR=/var/log/$NAME

# Elasticsearch data directory
DATA_DIR=/var/lib/$NAME

# Elasticsearch configuration directory
CONF_DIR=/etc/$NAME

# Maximum number of VMA (Virtual Memory Areas) a process can own
MAX_MAP_COUNT=262144

# Path to the GC log file
#ES_GC_LOG_FILE=/var/log/elasticsearch/gc.log

# Elasticsearch PID file directory
PID_DIR="/var/run/elasticsearch"

# End of variables that can be overwritten in $DEFAULT

# overwrite settings from default file
if [ -f "$DEFAULT" ]; then
        . "$DEFAULT"
fi

# CONF_FILE setting was removed
if [ ! -z "$CONF_FILE" ]; then
    echo "CONF_FILE setting is no longer supported. elasticsearch.yml must be placed in the config directory and cannot be renamed."
    exit 1
fi

# Define other required variables
PID_FILE="$PID_DIR/$NAME.pid"
DAEMON=$ES_HOME/bin/elasticsearch
DAEMON_OPTS="-d -p $PID_FILE --default.path.home=$ES_HOME --default.path.logs=$LOG_DIR --default.path.data=$DATA_DIR --default.path.conf=$CONF_DIR"

export ES_HEAP_SIZE
export ES_HEAP_NEWSIZE
export ES_DIRECT_SIZE
export ES_JAVA_OPTS
export ES_GC_LOG_FILE
export JAVA_HOME
export ES_INCLUDE

# Check DAEMON exists
test -x $DAEMON || exit 0

checkJava() {
    if [ -x "$JAVA_HOME/bin/java" ]; then
            JAVA="$JAVA_HOME/bin/java"
    else
            JAVA=`which java`
    fi

    if [ ! -x "$JAVA" ]; then
            echo "Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME"
            exit 1
    fi
}

case "$1" in
  start)
    checkJava

    if [ -n "$MAX_LOCKED_MEMORY" -a -z "$ES_HEAP_SIZE" ]; then
            log_failure_msg "MAX_LOCKED_MEMORY is set - ES_HEAP_SIZE must also be set"
            exit 1
    fi

It started working again, but it still reports 1GB; after these settings, should that now read 2GB?
I am also wondering about the process buffer and output buffer: while the problems were occurring they sat at 98-100%, and now they are at 0%. On top of that, CPU load dropped from about 90% to 35%, and memory usage is around 11GB.
How can this be explained? Did my changes actually help, or is it a coincidence?
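
One way to tell whether the 2GB heap actually took effect, rather than guessing, is to ask Elasticsearch what heap its JVM was started with (again assuming the default port 9200):

# heap_max_in_bytes should be about 2147483648 (2GB) if the new setting applied
curl -s 'http://localhost:9200/_nodes/jvm?pretty' | grep heap_max

If the heap really did grow, reduced garbage-collection pressure would explain both the empty buffers and the lower CPU load.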
