Graylog Kubernetes file lock on volume mount

Hello,

I’ve been going through the documentation on Docker Hub and have everything converted for k8s. However, when I deploy my StatefulSet and view the logs from my container, I’m getting the following repeating error message:

Caused by: kafka.common.KafkaException: Failed to acquire lock on file .lock in /usr/share/graylog/data/journal. A Kafka instance in another process or thread is using this directory.

Here’s a copy of my K8s manifest:

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: graylog
  namespace: graylog
  labels:
    component: graylog
    role: master
spec:
  serviceName: graylog-master
  replicas: 1
  selector:
    matchLabels:
      component: graylog
      role: master
  template:
    metadata:
      labels:
        component: graylog
        role: master
    spec:
      serviceAccountName: graylog
      containers:
      - name: graylog
        image: graylog2/graylog:2.4.3-1
        ports:
        - containerPort: 9000
          name: http
          protocol: TCP
        volumeMounts:
        - name: graylog-journal
          mountPath: /usr/share/graylog/data/journal
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: "GRAYLOG_ELASTICSEARCH_HOSTS"
          value: "http://elasticsearch-graylog.svc.cluster.local:9200/"
        - name: "GRAYLOT_ELASTICSEARCH_DISCOVERY_ENABLED"
          value: "true"
        - name: "GRAYLOG_MONGODB_URI"
          value: "mongodb://mongodb-graylog-mongodb.graylog.svc.cluster.local:27017/graylog"
        - name: "GRAYLOG_PASSWORD_SECRET"
          valueFrom:
            secretKeyRef:
              name: graylog-secrets
              key: gl_passwd_secret
        - name: "GRAYLOG_ROOT_PASSWORD_SHA2"
          valueFrom:
            secretKeyRef:
              name: graylog-secrets
              key: gl_root_passwd
        - name: "GRAYLOG_REST_TRANSPORT_URI"
          value: "http://0.0.0.0:9000/api/"
        - name: "GRAYLOG_REST_LISTEN_URI"
          value: "http://0.0.0.0:9000/api/"
        - name: "GRAYLOG_WEB_LISTEN_URI"
          value: "http://0.0.0.0:9000"
        - name: "GRAYLOG_ROOT_TIMEZONE"
          value: "GMT"
  volumeClaimTemplates:
  - metadata:
      name: graylog-journal
      namespace: graylog
    spec:
      accessModes: [ 'ReadWriteOnce' ]
      storageClassName: standard
      resources:
        requests:
          storage: "10Gi"

I have also confirmed that the StatefulSet is creating a new PVC and that it is mounted. I just don’t understand why or how Kafka can’t acquire this lock in a brand-new container.

Maybe this little snippet in the FAQ can help — the journal cannot live in the root of an ext filesystem, because the lost+found directory there gets in its way:

http://docs.graylog.org/en/2.4/pages/faq.html#dedicated-partition-for-the-journal

That is strangely interesting. Since we are using dynamic volume provisioning with k8s on AWS, there is no easy way to access the EBS volume, remove the lost+found directory, and restart Graylog. I think we are going to have to go back to the drawing board on this.

You could use a subdirectory on that partition/volume which doesn’t have a lost+found directory by default. Alternatively, use a different file system for the partition/volume such as XFS.
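In a Kubernetes manifest, one way to get such a subdirectory without ever touching the volume by hand is a subPath on the existing volume mount: the kubelet creates the named directory inside the volume if it doesn’t exist, so the journal never sees the filesystem root with its lost+found. A minimal sketch against the StatefulSet above (the subPath name is arbitrary):

        volumeMounts:
        - name: graylog-journal
          mountPath: /usr/share/graylog/data/journal
          subPath: journal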

Unfortunately, the issue is that the EBS volume is created dynamically when the Graylog pod launches.

https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/#using-dynamic-provisioning

As far as I know, there isn’t a way to create directories or choose filesystem types during dynamic provisioning.
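Actually, the in-tree AWS EBS provisioner does accept an fsType parameter on a StorageClass, so dynamically provisioned volumes can be formatted with XFS, which doesn’t create a lost+found directory. A sketch, with a made-up class name:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: graylog-xfs
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: xfs

You would then reference it from the claim template via storageClassName: graylog-xfs.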

Graylog will create any subdirectories for the path given in message_journal_dir if the system user running Graylog has sufficient privileges.
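So pointing the journal at a subdirectory of the mount is enough. With the Docker image this can be done through an environment variable, since GRAYLOG_-prefixed variables map onto server.conf settings such as message_journal_dir. A sketch against the env list above (the subdirectory name is arbitrary; Graylog creates it on startup):

        - name: "GRAYLOG_MESSAGE_JOURNAL_DIR"
          value: "/usr/share/graylog/data/journal/messages"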

Ah, that’s a good call. I was able to create another subdirectory, and Graylog is able to start up. Thanks for the info!