Packetbeat Docker Host

Hi guys,
I have a question. I have 10 containers published on the Internet from one of my Docker hosts, and I need to monitor the traffic of all of these containers with Packetbeat.
But I have no idea how to run and configure it.
Thanks for any guidance and help.

Hello @bahram

Are you using the Graylog sidecar, or just a Packetbeat instance?
Could you explain this environment in greater detail: the OS used, any configuration you have now, etc.?

Have you seen this?

Hi gsmith,

Thanks a lot for responding.
I have a Docker host (Debian GNU/Linux 10 (buster), Docker version 20.10.5) running 20 containers for web services.
I need to capture all container traffic on the Docker host.
I installed and configured Packetbeat on the Docker host to listen on the docker0 interface, but in practice I did not see any traffic from the containers. I also used the Graylog sidecar.
How can I see the traffic of each container with Packetbeat?

Thank you for your guidance and support

Hello,

I assume these Docker containers are using a bridge to eth0?

2: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:a1:12:58:d6 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:a1ff:fe12:58d6/64 scope link
       valid_lft forever preferred_lft forever

The docker0 network interface
Here are the basics of the docker0 network interface, which Packetbeat uses to listen for container traffic. A single interface must be specified in the Packetbeat configuration file so Packetbeat can listen for incoming and outgoing packets and send them to Graylog.

When a Docker container starts, a veth* network interface is created for it. This allows the application running inside the container to communicate with the rest of the world through that interface.
Docker, by default, creates a docker0 network interface that acts like a network bridge to other veth* interfaces. Therefore, if you can directly access the docker0 network interface, you can monitor the traffic from all veth* interfaces automatically and see the incoming and outgoing traffic of every Docker container.
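Before wiring Packetbeat up, it can be worth confirming that docker0 actually sees the container traffic. A minimal sanity check with tcpdump (assuming tcpdump is installed and you have root; on a host without docker0 it just prints a note):

```shell
# Capture a few packets on the docker0 bridge; if the containers are
# exchanging traffic, their 172.17.0.0/16 addresses should show up here.
if command -v tcpdump >/dev/null 2>&1 && ip link show docker0 >/dev/null 2>&1
then
    tcpdump -i docker0 -nn -c 10
else
    echo "tcpdump or docker0 not available on this host"
fi
```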

If Docker is already installed and ready to use, you can start Packetbeat in its own container. Packetbeat can also be installed directly on the host, but since Docker is already available, it’s easier to run it as a container.

A Docker image for Packetbeat already exists, so it’s simple to start it with one command:

# docker run --name packetbeat -d --net=host something/packetbeat app:start

As for GL sidecar I’m not sure…

EDIT: Out of curiosity, what configuration did you make with the GL sidecar / Packetbeat?

Hope that helps

Hello,

I was able to get around to “labbing” this issue out.

Since this is my first time using the Graylog sidecar with Packetbeat, I had the same issue: no messages/logs were being received from my Docker host.

The steps taken to get this to work were a bit unusual, and I’m not sure exactly what happened, since this is fairly new to me.

The following is what I executed, and it worked.

Steps

  • Download Packetbeat package.
curl -L -O https://artifacts.elastic.co/downloads/beats/packetbeat/packetbeat-8.2.1-amd64.deb
  • For testing the connection, I configured & started Packetbeat by itself.
systemctl start packetbeat
  • Once data/logs were shown in the Web UI, I stopped the Packetbeat service.

  • Next, I configured graylog-sidecar to add Packetbeat to collector_binaries_whitelist:, as shown below.

root@ansible:/var/lib/graylog-sidecar/generated# cat /etc/graylog/sidecar/sidecar.yml  | egrep -v "^\s*(#|$)"
server_url: "https://8.8.8.8:9000/api/"
server_api_token: "647na8fg66oathdp4sa0869uv85gj5d5d2p7pvji4fkkeqh9n3j"
node_id: "file:/etc/graylog/sidecar/node-id"
node_name: "ansible"
update_interval: 10
tls_skip_verify: true
send_status: true
log_path: "/var/log/graylog-sidecar"
log_rotate_max_file_size: "10MiB"
log_rotate_keep_files: 10
collector_binaries_whitelist:
   - "/usr/bin/packetbeat"  <--- HERE
   - "/usr/share/filebeat/bin/filebeat"

Web UI configuration

  • Created a collector template for Packetbeat.
packetbeat.interfaces.device: any
packetbeat.interfaces.internal_networks:
  - private
packetbeat.flows:
  timeout: 30s
  period: 10s
packetbeat.protocols:
- type: icmp
  enabled: true
- type: amqp
  ports: [5672]
- type: cassandra
  ports: [9042]
- type: dhcpv4
  ports: [67, 68]
- type: dns
  ports: [53]
- type: http
  ports: [80, 8080, 8000, 5000, 8002]
- type: memcache
  ports: [11211]
- type: mysql
  ports: [3306,3307]
- type: pgsql
  ports: [5432]
- type: redis
  ports: [6379]
- type: thrift
  ports: [9090]
- type: mongodb
  ports: [27017]
- type: nfs
  ports: [2049]
- type: tls
  ports:
    - 443   # HTTPS
    - 993   # IMAPS
    - 995   # POP3S
    - 5223  # XMPP over SSL
    - 8443
    - 8883  # Secure MQTT
    - 9243  # Elasticsearch
- type: sip
  ports: [5060]
  _source.enabled: true
output.logstash:
  hosts: ["8.8.8.8:5066"]
processors:
  - # Add forwarded to tags when processing data from a network tap or mirror.
    if.contains.tags: forwarded
    then:
      - drop_fields:
          fields: [host]
    else:
      - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - detect_mime_type:
      field: http.request.body.content
      target: http.request.mime_type
  - detect_mime_type:
      field: http.response.body.content
      target: http.response.mime_type
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/packetbeat
  name: packetbeat
  keepfiles: 7
  permissions: 0640
  • Created a configuration called Packetbeat.

  • Went to Administration and applied the configuration to the node.

  • Adjusted the firewall for the Beats port; in my case I’m using Beats input port 5066.

  • Start Packetbeat.
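After starting things up, a quick way to confirm that the sidecar actually rendered a collector configuration and spawned the Packetbeat process (paths are the defaults from the sidecar.yml above; a sketch, not a definitive check):

```shell
# List the configurations the sidecar rendered for its collectors
# (the directory only exists once the sidecar has run):
ls /var/lib/graylog-sidecar/generated/ 2>/dev/null || true

# Check whether a packetbeat process was actually spawned:
if command -v pgrep >/dev/null 2>&1; then
    pgrep -a packetbeat || echo "packetbeat not running on this host"
fi
```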

Results:

And just so it’s clear, here is the Packetbeat service status.

root@ansible:~# systemctl status packetbeat
● packetbeat.service - Packetbeat analyzes network traffic and sends the data to Elasticsearch.
     Loaded: loaded (/lib/systemd/system/packetbeat.service; disabled; vendor preset: enabled)
     Active: inactive (dead)
       Docs: https://www.elastic.co/beats/packetbeat

May 24 21:03:13 ansible packetbeat[3097350]: {"log.level":"info","@timestamp":"2022-05-24T21:03:13.424-0500","log.logger":"monitoring","log.origin":{"file.>
May 24 21:03:30 ansible systemd[1]: Stopping Packetbeat analyzes network traffic and sends the data to Elasticsearch....
May 24 21:03:30 ansible packetbeat[3097350]: {"log.level":"info","@timestamp":"2022-05-24T21:03:30.159-0500","log.origin":{"file.name":"beater/packetbeat.g>
May 24 21:03:30 ansible packetbeat[3097350]: {"log.level":"info","@timestamp":"2022-05-24T21:03:30.189-0500","log.origin":{"file.name":"flows/util.go","fil>
May 24 21:03:30 ansible packetbeat[3097350]: {"log.level":"info","@timestamp":"2022-05-24T21:03:30.625-0500","log.logger":"monitoring","log.origin":{"file.>
May 24 21:03:30 ansible packetbeat[3097350]: {"log.level":"info","@timestamp":"2022-05-24T21:03:30.625-0500","log.logger":"monitoring","log.origin":{"file.>
May 24 21:03:30 ansible packetbeat[3097350]: {"log.level":"info","@timestamp":"2022-05-24T21:03:30.625-0500","log.logger":"monitoring","log.origin":{"file.>
May 24 21:03:30 ansible packetbeat[3097350]: {"log.level":"info","@timestamp":"2022-05-24T21:03:30.625-0500","log.origin":{"file.name":"instance/beat.go",">
May 24 21:03:30 ansible systemd[1]: packetbeat.service: Succeeded.
May 24 21:03:30 ansible systemd[1]: Stopped Packetbeat analyzes network traffic and sends the data to Elasticsearch..

I followed the instructions below, but replaced Rsyslog with Packetbeat. This documentation is for a different/older version, but it covered the configuration needed to make this work.

Some more testing is probably needed but I was able to get it to work.


Hi gsmith,

Thank you very much for your useful and valuable answer.
I installed and configured Packetbeat on the Docker host.

jamal.mahmoudi@debian10:~$ sudo packetbeat test config
Config OK
jamal.mahmoudi@debian10:~$ sudo packetbeat test output
logstash: 192.168.33.200:5050...
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 192.168.33.200
    dial up... OK
  TLS... WARN secure connection disabled
  talk to server... OK
jamal.mahmoudi@debian10:~$
...................................................................
My packetbeat.yml config:

jamal.mahmoudi@debian10:~$ cat /etc/packetbeat/packetbeat.yml
#################### Packetbeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The packetbeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/packetbeat/index.html

# =============================== Network device ===============================

# Select the network interface to sniff the data. On Linux, you can use the
# "any" keyword to sniff on all connected interfaces.
packetbeat.interfaces.device: any

#packetbeat.interfaces.type: af_packet
#packetbeat.interfaces.buffer_size_mb: 100
#packetbeat.interfaces.snaplen: 65535
# The network CIDR blocks that are considered "internal" networks for
# the purpose of network perimeter boundary classification. The valid
# values for internal_networks are the same as those that can be used
# with processor network conditions.
#
# For a list of available values see:
# https://www.elastic.co/guide/en/beats/packetbeat/current/defining-processors.html#condition-network
packetbeat.interfaces.internal_networks:
  - private

# =================================== Flows ====================================

# Set `enabled: false` or comment out all options to disable flows reporting.
packetbeat.flows:
  # Set network flow timeout. Flow is killed if no packet is received before being
  # timed out.
  timeout: 30s

  # Configure reporting period. If set to -1, only killed flows will be reported
  period: 10s

# =========================== Transaction protocols ============================

packetbeat.protocols:
- type: icmp
  # Enable ICMPv4 and ICMPv6 monitoring. The default is true.
  enabled: true

- type: amqp
  # Configure the ports where to listen for AMQP traffic. You can disable
  # the AMQP protocol by commenting out the list of ports.
  ports: [5672]

- type: cassandra
  # Configure the ports where to listen for Cassandra traffic. You can disable
  # the Cassandra protocol by commenting out the list of ports.
  ports: [9042]

- type: dhcpv4
  # Configure the DHCP for IPv4 ports.
  ports: [67, 68]

- type: dns
  # Configure the ports where to listen for DNS traffic. You can disable
  # the DNS protocol by commenting out the list of ports.
  ports: [53]

- type: http
  # Configure the ports where to listen for HTTP traffic. You can disable
  # the HTTP protocol by commenting out the list of ports.
  ports: [80, 8080, 8000, 443, 8002, 7090, 3000, 30009, 30017, 30018, 30019, 30020, 10050]

  include_body_for: ["application/json","text/html","application/rest+xml","text/xml"]
  send_all_headers: true
  real_ip_header: "X-Forwarded-For"
  send_request: false
  send_response: false
  keep_null: true
- type: memcache
  # Configure the ports where to listen for memcache traffic. You can disable
  # the Memcache protocol by commenting out the list of ports.
  ports: [11211]

- type: mysql
  # Configure the ports where to listen for MySQL traffic. You can disable
  # the MySQL protocol by commenting out the list of ports.
  ports: [3306,3307]

- type: pgsql
  # Configure the ports where to listen for Pgsql traffic. You can disable
  # the Pgsql protocol by commenting out the list of ports.
  ports: [5432]

- type: redis
  # Configure the ports where to listen for Redis traffic. You can disable
  # the Redis protocol by commenting out the list of ports.
  ports: [6379]

- type: thrift
  # Configure the ports where to listen for Thrift-RPC traffic. You can disable
  # the Thrift-RPC protocol by commenting out the list of ports.
  ports: [9090]

- type: mongodb
  # Configure the ports where to listen for MongoDB traffic. You can disable
  # the MongoDB protocol by commenting out the list of ports.
  ports: [27017]

- type: nfs
  # Configure the ports where to listen for NFS traffic. You can disable
  # the NFS protocol by commenting out the list of ports.
  ports: [2049]

- type: tls
  # Configure the ports where to listen for TLS traffic. You can disable
  # the TLS protocol by commenting out the list of ports.
  ports:
    - 443   # HTTPS
    - 993   # IMAPS
    - 995   # POP3S
    - 5223  # XMPP over SSL
    - 8443
    - 8883  # Secure MQTT
    - 9243  # Elasticsearch

- type: sip
  # Configure the ports where to listen for SIP traffic. You can disable
  # the SIP protocol by commenting out the list of ports.
  ports: [5060]

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# A list of tags to include in every event. In the default configuration file
# the forwarded tag causes Packetbeat to not add any host fields. If you are
# monitoring a network tap or mirror port then add the forwarded tag.
#tags: [forwarded]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
setup.dashboards.enabled: true

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Packetbeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["192.168.33.200:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.33.200:5050"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================

processors:
  - # Add forwarded to tags when processing data from a network tap or mirror.
    if.contains.tags: forwarded
    then:
      - drop_fields:
          fields: [host]
    else:
      - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - detect_mime_type:
      field: http.request.body.content
      target: http.request.mime_type
  - detect_mime_type:
      field: http.response.body.content
      target: http.response.mime_type

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Packetbeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Packetbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the packetbeat.
#instrumentation:
    # Set to true to enable instrumentation of packetbeat.
    #enabled: false

    # Environment in which packetbeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
.....................................................................................................................
My graylog-sidecar config:

jamal.mahmoudi@eniac-bank.local@ofogh-epm:~$ sudo cat /etc/graylog/sidecar/sidecar.yml
# The URL to the Graylog server API.
#server_url: "http://127.0.0.1:9000/api/"
server_url: "http://192.168.33.200:9000/api/"

# The API token to use to authenticate against the Graylog server API.
# This field is mandatory
server_api_token: "....................................................."

# The node ID of the sidecar. This can be a path to a file or an ID string.
# If set to a file and the file doesn't exist, the sidecar will generate an
# unique ID and writes it to the configured path.
#
# Example file path: "file:/etc/graylog/sidecar/node-id"
# Example ID string: "6033137e-d56b-47fc-9762-cd699c11a5a9"
#
# ATTENTION: Every sidecar instance needs a unique ID!
#
#node_id: "file:/etc/graylog/sidecar/node-id"

# The node name of the sidecar. If this is empty, the sidecar will use the
# hostname of the host it is running on.
node_name: "OFOGH_Packetbat_server"

# The update interval in seconds. This configures how often the sidecar will
# contact the Graylog server for keep-alive and configuration update requests.
#update_interval: 10

# This configures if the sidecar should skip the verification of TLS connections.
# Default: false
#tls_skip_verify: false

# This enables/disables the transmission of detailed sidecar information like
# collector statues, metrics and log file lists. It can be disabled to reduce
# load on the Graylog server if needed. (disables some features in the server UI)
#send_status: true

# A list of directories to scan for log files. The sidecar will scan each
# directory for log files and submits them to the server on each update.
#
# Example:
#     list_log_files:
#       - "/var/log/nginx"
#       - "/opt/app/logs"
#
# Default: empty list
#list_log_files: []

# Directory where the sidecar stores internal data.
#cache_path: "/var/cache/graylog-sidecar"

# Directory where the sidecar stores logs for collectors and the sidecar itself.
#log_path: "/var/log/graylog-sidecar"

# The maximum size of the log file before it gets rotated.
#log_rotate_max_file_size: "10MiB"

# The maximum number of old log files to retain.
#log_rotate_keep_files: 10

# Directory where the sidecar generates configurations for collectors.
#collector_configuration_directory: "/var/lib/graylog-sidecar/generated"

# A list of binaries which are allowed to be executed by the Sidecar. An empty list disables the whitelist feature.
# Wildcards can be used, for a full pattern description see https://golang.org/pkg/path/filepath/#Match
# Example:
#     collector_binaries_whitelist:
#       - "/usr/bin/filebeat"
#       - "/opt/collectors/*"
#
# Example disable whitelisting:
#     collector_binaries_whitelist: []
#

collector_binaries_whitelist: []
backends:
    - name: packetbeat
      enabled: true
      binary_path: /usr/share/packetbeat/bin/packetbeat
      configuration_path: /etc/packetbeat/packetbeat.yml
    - name: filebeat
      enabled: true
      binary_path: /usr/share/filebeat/bin/filebeat
      configuration_path: /etc/filebeat/filebeat.yml
    - name: auditbeat
      enabled: true
      binary_path: /usr/share/auditbeat/bin/auditbeat
      configuration_path: /etc/auditbeat/auditbeat.yml

...................................................................................
My Graylog collector configuration:

# Needed for Graylog
fields_under_root: true
fields.collector_node_id: ${sidecar.nodeName}
fields.gl2_source_collector: ${sidecar.nodeId}

packetbeat.interfaces:
  device: 0
packetbeat.protocols:
  dns:
    ports: [53]
  http:
    ports: [80, 8080, 8000, 443, 8002, 7090, 3000, 30009, 30017, 30018, 30019, 30020, 10050]
    include_body_for: ["application/json","text/html","application/rest+xml","text/xml"]
    send_all_headers: true
    real_ip_header: "X-Forwarded-For"
    send_request: false
    send_response: false
    keep_null: true

output.logstash:
   hosts: ["192.168.33.200:5050"]
path:
  data: /var/lib/graylog-sidecar/collectors/packetbeat/data
  logs: /var/lib/graylog-sidecar/collectors/packetbeat/log
----------------------------------------------------------------------------------------------
But I do not receive any packets from the active interfaces on the Docker host.
I need to capture all of my containers’ traffic.
Which interface do you think I should choose?

  jamal.mahmoudi@debian10:~$  packetbeat devices
0: br-a8d7d60a4ff0 (No description available) (172.17.25.1 fe80::42:9fff:fe59:c781)
1: vethb1b27e0 (No description available) (fe80::945b:bff:fee0:2ee9)
2: veth5980dfa (No description available) (fe80::10aa:bdff:fecd:520f)
3: veth63e57dd (No description available) (fe80::a4f6:bff:fe1b:4b29)
4: veth73ca9ad (No description available) (fe80::e498:a7ff:fe4f:6969)
5: docker0 (No description available) (172.17.0.1 fe80::42:5fff:fe06:9a46)
6: veth0892eab (No description available) (fe80::6045:2aff:fe8f:d421)
7: vethdf01c1c (No description available) (fe80::3c3e:1bff:fe8f:15aa)
8: veth33fd9b2 (No description available) (fe80::aca4:92ff:fe5f:23f2)
9: vetha114dd2 (No description available) (fe80::4400:56ff:feb8:fdb9)
10: veth38c9af2 (No description available) (fe80::5c75:20ff:fee6:5e07)
11: veth56f89b2 (No description available) (fe80::fc44:4ff:fe73:8817)
12: br-a4b87f57add4 (No description available) (172.17.26.1 fe80::42:48ff:feb7:6fc)
13: veth96116a5 (No description available) (fe80::c5b:c0ff:fe5b:9179)
14: veth4602c5e (No description available) (fe80::e087:1ff:fe78:2e67)
15: vetheac5e6b (No description available) (fe80::ccc9:b7ff:fe52:a931)
16: vethe348a6f (No description available) (fe80::78bc:a8ff:fe41:1d31)
17: veth022a6f8 (No description available) (fe80::1060:d2ff:fee7:d44d)
18: veth3d5c8a8 (No description available) (fe80::4031:d4ff:fe1d:81de)
19: veth3a9af14 (No description available) (fe80::18d3:71ff:fe23:49a9)
20: veth90ff65d (No description available) (fe80::8482:deff:fe99:c0c0)
21: veth9360a67 (No description available) (fe80::2c6d:6bff:fe1e:9195)
22: veth03a8b69 (No description available) (fe80::741d:77ff:feb6:3978)
23: veth05e6a77 (No description available) (fe80::3415:adff:fe6f:4c29)
24: veth5bd5d78 (No description available) (fe80::b0ad:dff:fe31:d021)
25: vethd2a082a (No description available) (fe80::6027:75ff:fe2c:e1ff)
26: veth3c5c155 (No description available) (fe80::d8d0:beff:fef9:7425)
27: veth87b186c (No description available) (fe80::ec60:cbff:fe23:30ad)
28: ens192 (No description available) (192.168.33.80 fe80::250:56ff:fe8c:c2e4)
29: vethb3d450f (No description available) (fe80::dcf3:30ff:fe8a:a5e1)
30: veth285c470 (No description available) (fe80::60bb:95ff:fe71:e604)
31: veth96f0633 (No description available) (fe80::9cde:4cff:fe30:9fbe)
32: vethbed638e (No description available) (fe80::acad:29ff:fe65:1359)
33: vethd0c648f (No description available) (fe80::dc4d:2aff:feb3:f862)
34: vethf1f1236 (No description available) (fe80::6413:19ff:fe0a:19d4)
35: br-402ba6af1598 (No description available) (172.17.3.1 fe80::42:ecff:fe7c:5fd1)
36: veth65a4774 (No description available) (fe80::e8fe:4bff:fe9a:ee53)
37: veth59a5700 (No description available) (fe80::5805:c0ff:fe4f:d439)
38: vethdbb7526 (No description available) (fe80::d8c0:cff:fea7:42b2)
39: vethcb28761 (No description available) (fe80::d090:65ff:fe50:13c5)
40: vethd93767c (No description available) (fe80::84f1:97ff:fed7:2426)
41: veth1090375 (No description available) (fe80::3488:5fff:fe73:e3b6)
42: any (Pseudo-device that captures on all interfaces) (Not assigned ip address)
43: lo (No description available) (127.0.0.1 ::1)
44: br-883a4f991877 (No description available) (172.17.21.1 fe80::42:b7ff:fe73:67e7)
45: nflog (Linux netfilter log (NFLOG) interface) (Not assigned ip address)
46: nfqueue (Linux netfilter queue (NFQUEUE) interface) (Not assigned ip address)

Hello,

That’s your container IP address range, which is probably bridged (i.e. docker0) to your localhost IP address (let’s say 192.168.1.100).
Depending on any routing, I would use that interface.

As for the logs shown above, you might want to take a look at this
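For what it’s worth, the collector configuration above uses `device: 0`, and on this host `packetbeat devices` lists br-a8d7d60a4ff0 at index 0, not docker0 (index 5). Since the veth* indexes shift whenever containers restart, naming the interface is safer. A sketch, assuming the docker0 bridge is the one to monitor:

```yaml
# Select the capture interface by name instead of by numeric index;
# indexes change as veth* interfaces come and go with containers.
packetbeat.interfaces.device: docker0

# Or capture on every interface (bridges and all veth* pairs) at once:
#packetbeat.interfaces.device: any
```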


Hi,
cool & perfect
Thanks so much gsmith

No problem :+1: Anytime @bahram
