When I run docker compose, it brings up the container, and after a few seconds it stops. I checked the logs (attached here) and I am getting the following error. I tried to research it but couldn't find anything definitive.
veth2963bc3: Failed to query device driver: No such device
I have used the Graylog docker container up until the 4.6 version without any issues.
It looks like an interface is being created and then deleted (despite being named eth0). Can you check the system logs as well (with journalctl -f or tail -f /var/log/syslog)?
I believe this is what you are experiencing.
Apr 11 11:35:25 Graylog kernel: [ 879.293038] br-590170de7e06: port 3(veth65bbc04) entered blocking state
Apr 11 11:35:25 Graylog kernel: [ 879.293063] br-590170de7e06: port 3(veth65bbc04) entered disabled state
Apr 11 11:35:25 Graylog kernel: [ 879.293360] device veth65bbc04 entered promiscuous mode
Apr 11 11:35:30 Graylog kernel: [ 884.922697] br-590170de7e06: port 3(veth65bbc04) entered disabled state
Apr 11 11:35:30 Graylog kernel: [ 884.922824] veth2963bc3: renamed from eth0
Apr 11 11:35:30 Graylog kernel: [ 884.937797] br-590170de7e06: port 3(veth65bbc04) entered disabled state
I have seen a lot, but not this. What I get from those logs is that your Ethernet port was for some reason renamed (i.e., eth0: renamed from veth2963bc3) and then renamed back (i.e., veth2963bc3: renamed from eth0). You may want to look at your network file.
Here is an example of mine; I don't use DHCP.
I would first look at your network port name using ip add.
root@ansible:/usr/share/opensearch/bin# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:15:5d:b8:5a:00 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.100/24 brd 192.168.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::215:5dff:feb8:5a00/64 scope link
valid_lft forever preferred_lft forever
root@ansible:/usr/share/opensearch/bin#
Then adjust your network file to match.
# This is the network config written by 'subiquity'
network:
  ethernets:
    eth0:
      addresses: [192.168.1.100/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
  version: 2
I don’t have any interface named “eth0”. The one I have is “ens18”, and there are also other interfaces created by Docker. The weird part is that the interface it is trying to rename doesn’t even exist. Here is my ip add output.
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 26:00:22:99:af:e9 brd ff:ff:ff:ff:ff:ff
altname enp0s18
inet <My Public IP> brd 69.158.225.255 scope global ens18
valid_lft forever preferred_lft forever
inet6 fe80::2400:22ff:fe99:afe9/64 scope link
valid_lft forever preferred_lft forever
3: ens19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether da:02:bc:91:2f:51 brd ff:ff:ff:ff:ff:ff
altname enp0s19
inet 172.16.0.36/24 brd 172.16.0.255 scope global ens19
valid_lft forever preferred_lft forever
inet6 fe80::d802:bcff:fe91:2f51/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:ca:87:8b:7f brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
5: br-a305c33be871: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:73:94:1c:28 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global br-a305c33be871
valid_lft forever preferred_lft forever
inet6 fe80::42:73ff:fe94:1c28/64 scope link
valid_lft forever preferred_lft forever
7: veth2eb3964@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-a305c33be871 state UP group default
link/ether 42:e6:05:63:b3:61 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::40e6:5ff:fe63:b361/64 scope link
valid_lft forever preferred_lft forever
9: vethc32cd2e@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-a305c33be871 state UP group default
link/ether 9e:45:1a:06:17:15 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::9c45:1aff:fe06:1715/64 scope link
valid_lft forever preferred_lft forever
37: vethedf9c6c@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-a305c33be871 state UP group default
link/ether 66:a2:76:4a:4d:50 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::64a2:76ff:fe4a:4d50/64 scope link
valid_lft forever preferred_lft forever
I see your interface is ens18, and for some reason you have another one called ens19. On top of that, I don't see the other interface from the logs, veth65bbc04. To be honest, I don't think this is a Graylog issue; I'm not sure what you have going on with your network. By chance, are you using DHCP?
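If you want to confirm from the host which veth interfaces actually exist, something like this should do it (a sketch, assuming the iproute2 tools are installed; the interface name is the one from the kernel logs above):

```shell
# List only veth interfaces, one per line, in brief form.
ip -br link show type veth

# Check for the specific interface seen in the kernel log.
ip link show veth65bbc04 2>/dev/null || echo "veth65bbc04 not present"
```

Since Docker creates and deletes veth pairs as containers start and stop, a veth from an old log entry being gone is expected; what matters is whether one is flapping while the container runs.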
Yes, I have ens19 as a second interface for the management network. And it also has a static IP address assigned.
And you are right about the veth65bbc04 interface. It doesn’t exist. I will try deploying it on a fresh install just to rule out any issues with the current system. As I said earlier, this is just a test environment.
@gsmith, what would you suggest to use as the OS? Debian or Ubuntu?
I have been digging into this all day, and here is what I found. It is a MongoDB requirement, but people configuring Graylog might encounter it, so I am putting the information here.
So Graylog 5.0 requires at least MongoDB 5. The issue I encountered is that MongoDB 5 is not supported on CPUs that don't have AVX, and this is exactly my situation.
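A quick way to check whether your CPU advertises AVX (on Linux, the CPU flags are exposed in /proc/cpuinfo):

```shell
# MongoDB 5.x requires the AVX instruction set; check the CPU flags.
if grep -qw avx /proc/cpuinfo; then
    echo "AVX supported - MongoDB 5+ should run"
else
    echo "No AVX - stay on MongoDB 4.x"
fi
```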
Since MongoDB doesn’t come up, Graylog doesn’t start either. So you either need a CPU with AVX, or, for now, use an older version of Graylog with a downgraded version of MongoDB.
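For anyone hitting this on non-AVX hardware, pinning the MongoDB image in the compose file is one way to stay on a 4.x release. A sketch (the service names here are assumptions; adjust them to match your own docker-compose file):

```yaml
# Hypothetical docker-compose fragment: pin MongoDB to the last 4.x release,
# which does not require AVX, and pair it with a Graylog 4.x image.
services:
  mongodb:
    image: mongo:4.4
  graylog:
    image: graylog/graylog:4.3
    depends_on:
      - mongodb
```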