Access to Suricata metrics in InfluxDB

I’m attempting to send data from the Suricata IDS to InfluxDB using Telegraf as the agent. I have no problems with the Telegraf-InfluxDB connection itself, since I manage to collect metrics like kernel, CPU, etc., all enabled in the telegraf.conf file.

Following the recommendations of the official documentation, I have configured Suricata as an input (Plugin directory | Telegraf 1.23 Documentation):

# Suricata stats and alerts plugin
[[inputs.suricata]]
  ## Data sink for Suricata stats and alerts logs
  # This is expected to be a filename of a
  # unix socket to be created for listening.
  source = "/tmp/suricata-stats.sock"

  # Delimiter for flattening field keys, e.g. subitem "alert" of "detect"
  # becomes "detect_alert" when delimiter is "_".
  delimiter = "_"

  ## Detect alert logs
  alerts = false

To configure Telegraf I selected Docker as the connector; the remaining options available are Systems, Kubernetes, Nginx and Redis. Finally, the InfluxDB authentication token is added and the Telegraf container is restarted, after which the Influx-Telegraf connection is found. Now I can see the metrics associated with all the inputs, except those related to Suricata:

For instance:

  • CPU:
# Read metrics about cpu usage
[[inputs.cpu]]
  ## Whether to report per-cpu stats or not
  percpu = true
  ## Whether to report total system cpu stats or not
  totalcpu = true
  ## If true, collect raw CPU time metrics
  collect_cpu_time = false
  ## If true, compute and report the sum of all non-idle CPU states
  report_active = false
  • Disk:
# Read metrics about disk usage by mount point
[[inputs.disk]]
  ## By default stats will be gathered for all mount points.
  ## Set mount_points will restrict the stats to only the specified mount points.
  # mount_points = ["/"]

  ## Ignore mount points by filesystem type.
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]

I can’t find a way to see the set of metrics that Suricata defines in the documentation: telegraf/plugins/inputs/suricata at release-1.23 · influxdata/telegraf · GitHub

Hi,

I can see the metrics associated with all the inputs, except those related to Suricata:

My first step would be to check your Telegraf logs and see what they show. It is very likely that something is not getting collected or going wrong during collection if only that plugin is not appearing.
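For example, assuming the container is named telegraf (an assumption based on a typical compose setup), something like the following should surface any errors from the Suricata input:

# Tail the Telegraf container logs (container name is an assumption)
docker logs --tail 100 -f telegraf

# Narrow the output down to messages from the Suricata plugin, if any
docker logs telegraf 2>&1 | grep -i suricata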

I can’t find a way to see the set of metrics that Suricata defines in the documentation

The page you linked to lists the tags and fields, including the metrics for Suricata, and it also shows a list of example metrics. Unlike other plugins, it looks like Suricata does not have fine-grained parameters to adjust which tags and fields show up; only the source location, a delimiter, and an alerts option are configurable.

Thanks !!!

Regarding your first point, these are the Telegraf logs:

2022-07-25T14:32:51Z I! Using config file: /etc/telegraf/telegraf.conf
2022-07-25T14:32:51Z I! Starting Telegraf 1.23.2
2022-07-25T14:32:51Z I! Loaded inputs: suricata
2022-07-25T14:32:51Z I! Loaded aggregators: 
2022-07-25T14:32:51Z I! Loaded processors: 
2022-07-25T14:32:51Z I! Loaded outputs: influxdb_v2
2022-07-25T14:32:51Z I! Tags enabled: host=fce4bed6b2b1
2022-07-25T14:32:51Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"fce4bed6b2b1", Flush Interval:10s
2022-07-25T14:32:51Z E! [telegraf] Error running agent: starting input inputs.suricata: listen unix /tmp/suricata-command.socket: bind: address already in use

This is the Telegraf service included in the docker-compose file:

  telegraf:
    container_name: telegraf
    image: telegraf:${TELEGRAF_VERSION}
    #profiles: ["telegraf"]
    networks:
      - influx
    ports:
      - 8125:8125/udp
      - 80:80
    restart: always
    depends_on:
      - influxdb
    env_file:
      - ./telegraf/telegraf.env
    volumes:
      - ./telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro
      - unix_stream-socket:/tmp/suricata-command.socket

… and these are the logs of Suricata:

Checking for capability sys_nice: yes
Checking for capability net_admin: yes
25/7/2022 -- 14:32:45 - <Notice> - This is Suricata version 6.0.6 RELEASE running in SYSTEM mode
25/7/2022 -- 14:32:45 - <Info> - CPUs/cores online: 8
25/7/2022 -- 14:32:45 - <Config> - app-layer.error-policy: ignore
25/7/2022 -- 14:32:45 - <Config> - 'default' server has 'request-body-minimal-inspect-size' set to 31638 and 'request-body-inspect-window' set to 4146 after randomization.
25/7/2022 -- 14:32:45 - <Config> - 'default' server has 'response-body-minimal-inspect-size' set to 39189 and 'response-body-inspect-window' set to 16880 after randomization.
25/7/2022 -- 14:32:45 - <Config> - SMB stream depth: 0
25/7/2022 -- 14:32:45 - <Config> - SMB max-read-size: 0
25/7/2022 -- 14:32:45 - <Config> - SMB max-write-size: 0
25/7/2022 -- 14:32:45 - <Config> - SMB max-write-queue-size: 0
25/7/2022 -- 14:32:45 - <Config> - SMB max-write-queue-cnt: 0
25/7/2022 -- 14:32:45 - <Config> - SMB max-read-queue-size: 0
25/7/2022 -- 14:32:45 - <Config> - SMB max-read-queue-cnt: 0
25/7/2022 -- 14:32:45 - <Config> - Protocol detection and parser disabled for modbus protocol.
25/7/2022 -- 14:32:45 - <Config> - Protocol detection and parser disabled for enip protocol.
25/7/2022 -- 14:32:45 - <Config> - Protocol detection and parser disabled for DNP3.
25/7/2022 -- 14:32:45 - <Warning> - [ERRCODE: SC_ERR_CONF_YAML_ERROR(242)] - App-Layer protocol sip enable status not set, so enabling by default. This behavior will change in Suricata 7, so please update your config. See ticket #4744 for more details.
25/7/2022 -- 14:32:45 - <Warning> - [ERRCODE: SC_ERR_CONF_YAML_ERROR(242)] - App-Layer protocol mqtt enable status not set, so enabling by default. This behavior will change in Suricata 7, so please update your config. See ticket #4744 for more details.
25/7/2022 -- 14:32:45 - <Warning> - [ERRCODE: SC_ERR_CONF_YAML_ERROR(242)] - App-Layer protocol rdp enable status not set, so enabling by default. This behavior will change in Suricata 7, so please update your config. See ticket #4744 for more details.
25/7/2022 -- 14:32:45 - <Info> - Found an MTU of 1500 for 'eth0'
25/7/2022 -- 14:32:45 - <Info> - Found an MTU of 1500 for 'eth0'
25/7/2022 -- 14:32:45 - <Config> - allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64
25/7/2022 -- 14:32:45 - <Config> - preallocated 1000 hosts of size 136
25/7/2022 -- 14:32:45 - <Config> - host memory usage: 398144 bytes, maximum: 33554432
25/7/2022 -- 14:32:45 - <Config> - Core dump size is unlimited.
25/7/2022 -- 14:32:45 - <Config> - defrag.memcap-policy: ignore
25/7/2022 -- 14:32:45 - <Config> - allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56
25/7/2022 -- 14:32:45 - <Config> - preallocated 65535 defrag trackers of size 160
25/7/2022 -- 14:32:45 - <Config> - defrag memory usage: 14155616 bytes, maximum: 33554432
25/7/2022 -- 14:32:45 - <Config> - flow.memcap-policy: ignore
25/7/2022 -- 14:32:45 - <Config> - flow size 320, memcap allows for 419430 flows. Per hash row in perfect conditions 6
25/7/2022 -- 14:32:45 - <Config> - stream "prealloc-sessions": 2048 (per thread)
25/7/2022 -- 14:32:45 - <Config> - stream "memcap": 67108864
25/7/2022 -- 14:32:45 - <Config> - stream "midstream" session pickups: disabled
25/7/2022 -- 14:32:45 - <Config> - stream "async-oneside": disabled
25/7/2022 -- 14:32:45 - <Config> - stream "checksum-validation": enabled
25/7/2022 -- 14:32:45 - <Config> - stream.memcap-policy: ignore
25/7/2022 -- 14:32:45 - <Config> - stream.reassembly.memcap-policy: ignore
25/7/2022 -- 14:32:45 - <Config> - memcap-policy: 0/0
25/7/2022 -- 14:32:45 - <Config> - stream."inline": disabled
25/7/2022 -- 14:32:45 - <Config> - stream "bypass": disabled
25/7/2022 -- 14:32:45 - <Config> - stream "max-synack-queued": 5
25/7/2022 -- 14:32:45 - <Config> - stream.reassembly "memcap": 268435456
25/7/2022 -- 14:32:45 - <Config> - stream.reassembly "depth": 1048576
25/7/2022 -- 14:32:45 - <Config> - stream.reassembly "toserver-chunk-size": 2436
25/7/2022 -- 14:32:45 - <Config> - stream.reassembly "toclient-chunk-size": 2620
25/7/2022 -- 14:32:45 - <Config> - stream.reassembly.raw: enabled
25/7/2022 -- 14:32:45 - <Config> - stream.reassembly "segment-prealloc": 2048
25/7/2022 -- 14:32:45 - <Info> - dropped the caps for main thread
25/7/2022 -- 14:32:45 - <Info> - fast output device (regular) initialized: fast.log
25/7/2022 -- 14:32:45 - <Warning> - [ERRCODE: SC_ERR_SOCKET(200)] - Error connecting to socket "/var/run/suricata/suricata-command.socket": Connection refused (will keep trying)
25/7/2022 -- 14:32:45 - <Info> - Setting logging socket of non-blocking in live mode.
25/7/2022 -- 14:32:45 - <Info> - eve-log output device (unix_stream) initialized: /var/run/suricata/suricata-command.socket
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'stats'
25/7/2022 -- 14:32:45 - <Info> - eve-log output device (regular) initialized: eve.json
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'alert'
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'anomaly'
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'http'
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'dns'
25/7/2022 -- 14:32:45 - <Config> - eve-log dns version not set, defaulting to version 2
25/7/2022 -- 14:32:45 - <Config> - eve-log dns version not set, defaulting to version 2
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'tls'
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'files'
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'smtp'
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'ftp'
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'rdp'
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'nfs'
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'smb'
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'tftp'
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'ikev2'
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'dcerpc'
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'krb5'
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'snmp'
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'rfb'
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'sip'
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'dhcp'
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'ssh'
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'mqtt'
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'stats'
25/7/2022 -- 14:32:45 - <Config> - enabling 'eve-log' module 'flow'
25/7/2022 -- 14:32:45 - <Info> - stats output device (regular) initialized: stats.log
25/7/2022 -- 14:32:45 - <Config> - Delayed detect disabled
25/7/2022 -- 14:32:45 - <Info> - Running in live mode, activating unix socket
25/7/2022 -- 14:32:45 - <Config> - pattern matchers: MPM: hs, SPM: hs
25/7/2022 -- 14:32:45 - <Config> - grouping: tcp-whitelist (default) 53, 80, 139, 443, 445, 1433, 3306, 3389, 6666, 6667, 8080
25/7/2022 -- 14:32:45 - <Config> - grouping: udp-whitelist (default) 53, 135, 5060
25/7/2022 -- 14:32:45 - <Config> - prefilter engines: MPM
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_uri
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_uri
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_raw_uri
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_raw_uri
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_request_line
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_client_body
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_response_line
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_header
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_header
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_header
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_header
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_header_names
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_header_names
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_header_names
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_header_names
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_accept
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_accept
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_accept_enc
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_accept_enc
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_accept_lang
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_accept_lang
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_referer
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_referer
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_connection
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_connection
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_content_len
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_content_len
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_content_len
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_content_len
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_content_type
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_content_type
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_content_type
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_content_type
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http.server
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http.server
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http.location
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http.location
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_protocol
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_protocol
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_start
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_start
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_raw_header
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_raw_header
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_raw_header
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_raw_header
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_method
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_method
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_cookie
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_cookie
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_cookie
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_cookie
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.name
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.name
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.name
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.name
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.name
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.name
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.name
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.name
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.name
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.name
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.name
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.magic
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.magic
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.magic
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.magic
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.magic
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.magic
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.magic
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.magic
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.magic
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.magic
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file.magic
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_user_agent
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_user_agent
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_host
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_host
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_raw_host
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_raw_host
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_stat_msg
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_stat_code
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http_stat_code
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http2_header_name
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http2_header_name
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http2_header
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for http2_header
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for dns_query
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for dnp3_data
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for dnp3_data
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for tls.sni
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for tls.cert_issuer
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for tls.cert_subject
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for tls.cert_serial
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for tls.cert_fingerprint
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for tls.certs
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for ja3.hash
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for ja3.string
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for ja3s.hash
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for ja3s.string
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for dce_stub_data
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for dce_stub_data
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for dce_stub_data
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for dce_stub_data
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for smb_named_pipe
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for smb_share
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for ssh.proto
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for ssh.proto
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for ssh_software
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for ssh_software
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for ssh.hassh
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for ssh.hassh.server
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for ssh.hassh.string
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for ssh.hassh.server.string
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file_data
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file_data
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file_data
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file_data
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file_data
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for file_data
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for krb5_cname
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for krb5_sname
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for sip.method
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for sip.uri
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for sip.protocol
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for sip.protocol
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for sip.method
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for sip.stat_msg
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for sip.request_line
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for sip.response_line
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for rfb.name
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for snmp.community
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for snmp.community
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for mqtt.connect.clientid
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for mqtt.connect.username
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for mqtt.connect.password
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for mqtt.connect.willtopic
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for mqtt.connect.willmessage
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for mqtt.publish.topic
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for mqtt.publish.message
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for mqtt.subscribe.topic
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for mqtt.unsubscribe.topic
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for icmpv4.hdr
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for tcp.hdr
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for udp.hdr
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for icmpv6.hdr
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for ipv4.hdr
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for ipv6.hdr
25/7/2022 -- 14:32:45 - <Config> - IP reputation disabled
25/7/2022 -- 14:32:45 - <Config> - Loading rule file: /var/lib/suricata/rules/emerging-exploit.rules
25/7/2022 -- 14:32:45 - <Config> - Loading rule file: /var/lib/suricata/rules/my-rules
25/7/2022 -- 14:32:45 - <Info> - 2 rule files processed. 1428 rules successfully loaded, 0 rules failed
25/7/2022 -- 14:32:45 - <Info> - Threshold config parsed: 0 rule(s) found
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for tcp-packet
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for tcp-stream
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for udp-packet
25/7/2022 -- 14:32:45 - <Perf> - using shared mpm ctx' for other-ip
25/7/2022 -- 14:32:45 - <Info> - 1428 signatures processed. 2 are IP-only rules, 378 are inspecting packet payload, 1047 inspect application layer, 0 are decoder event only
25/7/2022 -- 14:32:45 - <Config> - building signature grouping structure, stage 1: preprocessing rules... complete
25/7/2022 -- 14:32:45 - <Warning> - [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'ET.http.binary' is checked but not set. Checked in 2025195 and 1 other sigs
25/7/2022 -- 14:32:45 - <Warning> - [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'ET.http.javaclient' is checked but not set. Checked in 2015658 and 1 other sigs
25/7/2022 -- 14:32:45 - <Warning> - [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'et.IE7.NoRef.NoCookie' is checked but not set. Checked in 2024192 and 1 other sigs
25/7/2022 -- 14:32:45 - <Warning> - [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'ET.gocd.auth' is checked but not set. Checked in 2034333 and 0 other sigs
25/7/2022 -- 14:32:45 - <Warning> - [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'dcerpc.rpcnetlogon' is checked but not set. Checked in 2030870 and 6 other sigs
25/7/2022 -- 14:32:45 - <Warning> - [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'ET.BonitaDefaultCreds' is checked but not set. Checked in 2036817 and 0 other sigs
25/7/2022 -- 14:32:45 - <Perf> - TCP toserver: 41 port groups, 37 unique SGH's, 4 copies
25/7/2022 -- 14:32:45 - <Warning> - [ERRCODE: SC_WARN_POOR_RULE(276)] - rule 6: SYN-only to port(s) 80:80 w/o direction specified, disabling for toclient direction
25/7/2022 -- 14:32:45 - <Perf> - TCP toclient: 21 port groups, 11 unique SGH's, 10 copies
25/7/2022 -- 14:32:45 - <Perf> - UDP toserver: 30 port groups, 15 unique SGH's, 15 copies
25/7/2022 -- 14:32:45 - <Perf> - UDP toclient: 7 port groups, 4 unique SGH's, 3 copies
25/7/2022 -- 14:32:45 - <Perf> - OTHER toserver: 2 proto groups, 1 unique SGH's, 1 copies
25/7/2022 -- 14:32:45 - <Perf> - OTHER toclient: 2 proto groups, 0 unique SGH's, 2 copies
25/7/2022 -- 14:32:45 - <Perf> - Unique rule groups: 68
25/7/2022 -- 14:32:45 - <Perf> - Builtin MPM "toserver TCP packet": 23
25/7/2022 -- 14:32:45 - <Perf> - Builtin MPM "toclient TCP packet": 7
25/7/2022 -- 14:32:45 - <Perf> - Builtin MPM "toserver TCP stream": 31
25/7/2022 -- 14:32:45 - <Perf> - Builtin MPM "toclient TCP stream": 10
25/7/2022 -- 14:32:45 - <Perf> - Builtin MPM "toserver UDP packet": 15
25/7/2022 -- 14:32:45 - <Perf> - Builtin MPM "toclient UDP packet": 4
25/7/2022 -- 14:32:45 - <Perf> - Builtin MPM "other IP packet": 1
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver http_uri (http)": 8
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver http_uri (http2)": 8
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver http_raw_uri (http)": 1
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver http_raw_uri (http2)": 1
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver http_request_line (http)": 2
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver http_client_body (http)": 5
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver http_header (http)": 4
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toclient http_header (http)": 4
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver http_header (http2)": 4
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toclient http_header (http2)": 4
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver http_header_names (http)": 2
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toclient http_header_names (http)": 2
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver http_header_names (http2)": 2
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toclient http_header_names (http2)": 2
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver http_accept_enc (http)": 1
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver http_accept_enc (http2)": 1
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver http_referer (http)": 1
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver http_referer (http2)": 1
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver http_content_type (http)": 2
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver http_content_type (http2)": 2
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toclient http_content_type (http)": 2
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toclient http_content_type (http2)": 2
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toclient http.server (http)": 1
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toclient http.server (http2)": 1
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver http_start (http)": 1
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toclient http_start (http)": 1
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver http_cookie (http)": 3
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toclient http_cookie (http)": 3
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver http_cookie (http2)": 3
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toclient http_cookie (http2)": 3
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver http_user_agent (http)": 2
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver http_user_agent (http2)": 2
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toclient http_stat_code (http)": 1
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toclient http_stat_code (http2)": 1
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver tls.sni (tls)": 1
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver file_data (smtp)": 5
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toclient file_data (http)": 5
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver file_data (smb)": 5
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toclient file_data (smb)": 5
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toserver file_data (http2)": 5
25/7/2022 -- 14:32:45 - <Perf> - AppLayer MPM "toclient file_data (http2)": 5
25/7/2022 -- 14:32:48 - <Config> - Using flow cluster mode for AF_PACKET (iface eth0)
25/7/2022 -- 14:32:48 - <Config> - Using defrag kernel functionality for AF_PACKET (iface eth0)
25/7/2022 -- 14:32:48 - <Perf> - 8 cores, so using 8 threads
25/7/2022 -- 14:32:48 - <Perf> - Using 8 AF_PACKET threads for interface eth0
25/7/2022 -- 14:32:48 - <Config> - eth0: enabling zero copy mode by using data release call
25/7/2022 -- 14:32:48 - <Info> - Going to use 8 thread(s)
25/7/2022 -- 14:32:48 - <Config> - using 1 flow manager threads
25/7/2022 -- 14:32:48 - <Config> - using 1 flow recycler threads
25/7/2022 -- 14:32:48 - <Info> - Running in live mode, activating unix socket
25/7/2022 -- 14:32:48 - <Info> - Using unix socket file '/var/run/suricata/suricata-command.socket'
25/7/2022 -- 14:32:48 - <Warning> - [ERRCODE: SC_ERR_INITIALIZATION(45)] - Unix socket: UNIX socket bind(/var/run/suricata/suricata-command.socket) error: Address already in use
25/7/2022 -- 14:32:48 - <Warning> - [ERRCODE: SC_ERR_INITIALIZATION(45)] - Unable to create unix command socket
25/7/2022 -- 14:32:48 - <Notice> - all 8 packet processing threads, 4 management threads initialized, engine started.
25/7/2022 -- 14:32:48 - <Perf> - AF_PACKET RX Ring params: block_size=32768 block_nr=13 frame_size=1600 frame_nr=260
25/7/2022 -- 14:32:48 - <Perf> - AF_PACKET RX Ring params: block_size=32768 block_nr=13 frame_size=1600 frame_nr=260
25/7/2022 -- 14:32:48 - <Perf> - AF_PACKET RX Ring params: block_size=32768 block_nr=13 frame_size=1600 frame_nr=260
25/7/2022 -- 14:32:48 - <Perf> - AF_PACKET RX Ring params: block_size=32768 block_nr=13 frame_size=1600 frame_nr=260
25/7/2022 -- 14:32:48 - <Perf> - AF_PACKET RX Ring params: block_size=32768 block_nr=13 frame_size=1600 frame_nr=260
25/7/2022 -- 14:32:48 - <Perf> - AF_PACKET RX Ring params: block_size=32768 block_nr=13 frame_size=1600 frame_nr=260
25/7/2022 -- 14:32:48 - <Perf> - AF_PACKET RX Ring params: block_size=32768 block_nr=13 frame_size=1600 frame_nr=260
25/7/2022 -- 14:32:48 - <Perf> - AF_PACKET RX Ring params: block_size=32768 block_nr=13 frame_size=1600 frame_nr=260
25/7/2022 -- 14:32:48 - <Info> - All AFP capture threads are running.

… and this is the Suricata service in the docker-compose file:

  suricata:
    image: jasonish/suricata:${SURICATA_VERSION}
    container_name: suricata
    #profiles: ["suricata"]
    restart: always
    env_file:
      - './suricata/env.suricata'
    network_mode: "host"
    cap_add: 
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
    ports:
      - 80:80
    volumes:
      - ./suricata/suricata.yaml:/etc/suricata/suricata.yaml
      - ./suricata/log:/var/log/suricata
      - ./suricata/rules:/var/lib/suricata/rules
      - unix_stream-socket:/var/run/suricata/suricata-command.socket
1. I guess Suricata does not let you select specific metrics.

listen unix /tmp/suricata-command.socket: bind: address already in use

This is an error message meaning that the socket is already in use by another program. Are there multiple containers running?
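One way to check, assuming the usual tools are available where Telegraf runs, is to see what is holding the socket:

# List listening unix sockets together with the owning process
ss -xlp | grep suricata-command

# Or, if lsof is installed, ask what has the socket file open
lsof /tmp/suricata-command.socket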

You may want to consider trying to get this going outside of docker first to reduce the complexity.

It is strange, since I have disabled InfluxDB for the moment and I only have two containers: Suricata and Telegraf. I understand that the Telegraf agent creates the socket and listens on it, while Suricata opens the connection to it. Is that right?

Correct.
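For reference, on the Suricata side that connection comes from an eve-log output of filetype unix_stream pointing at the same socket. A minimal suricata.yaml sketch (socket path taken from your logs, other outputs omitted):

outputs:
  # EVE output that connects to the unix socket Telegraf listens on
  - eve-log:
      enabled: yes
      filetype: unix_stream
      filename: /var/run/suricata/suricata-command.socket
      types:
        - stats:
            threads: yes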

I was reading through your log messages more carefully and the suricata log had this in it a few times:

Is there a way for you to confirm that the socket is even available after starting suricata? I was looking at these docs to see if there are other ways to test the socket outside of telegraf: 20. Interacting via Unix Socket — Suricata 6.0.0 documentation
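For example, suricatasc accepts the socket path as an argument, so something like this (run against the Suricata container, with names and paths taken from your compose file and logs) should answer basic commands if the socket is alive:

# Pass the non-default socket path explicitly
docker exec -it suricata suricatasc /var/run/suricata/suricata-command.socket
# then try commands such as: version, uptime, iface-list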

Thanks !!!

After updating the suricata.yaml file, I got these logs:

25/7/2022 -- 19:30:55 - <Info> - Found an MTU of 1500 for 'eth0'
25/7/2022 -- 19:30:55 - <Info> - Found an MTU of 1500 for 'eth0'
25/7/2022 -- 19:30:55 - <Error> - [ERRCODE: SC_ERR_INVALID_VALUE(130)] - Problem with config file
25/7/2022 -- 19:30:55 - <Info> - No 'host-mode': suricata is in IDS mode, using default setting 'sniffer-only'
25/7/2022 -- 19:30:55 - <Info> - dropped the caps for main thread
25/7/2022 -- 19:30:55 - <Warning> - [ERRCODE: SC_ERR_SOCKET(200)] - Error connecting to socket "/var/run/suricata/suricata-command.socket": No such file or directory (will keep trying)
25/7/2022 -- 19:30:55 - <Info> - Setting logging socket of non-blocking in live mode.
25/7/2022 -- 19:30:55 - <Info> - eve-log output device (unix_stream) initialized: /var/run/suricata/suricata-command.socket
25/7/2022 -- 19:30:55 - <Info> - 2 rule files processed. 1428 rules successfully loaded, 0 rules failed
25/7/2022 -- 19:30:55 - <Info> - Threshold config parsed: 0 rule(s) found
25/7/2022 -- 19:30:55 - <Info> - 1428 signatures processed. 2 are IP-only rules, 378 are inspecting packet payload, 1047 inspect application layer, 0 are decoder event only
25/7/2022 -- 19:30:55 - <Warning> - [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'ET.http.binary' is checked but not set. Checked in 2025195 and 1 other sigs
25/7/2022 -- 19:30:55 - <Warning> - [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'ET.http.javaclient' is checked but not set. Checked in 2015658 and 1 other sigs
25/7/2022 -- 19:30:55 - <Warning> - [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'et.IE7.NoRef.NoCookie' is checked but not set. Checked in 2024192 and 1 other sigs
25/7/2022 -- 19:30:55 - <Warning> - [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'ET.gocd.auth' is checked but not set. Checked in 2034333 and 0 other sigs
25/7/2022 -- 19:30:55 - <Warning> - [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'dcerpc.rpcnetlogon' is checked but not set. Checked in 2030870 and 6 other sigs
25/7/2022 -- 19:30:55 - <Warning> - [ERRCODE: SC_WARN_FLOWBIT(306)] - flowbit 'ET.BonitaDefaultCreds' is checked but not set. Checked in 2036817 and 0 other sigs
25/7/2022 -- 19:30:55 - <Warning> - [ERRCODE: SC_WARN_POOR_RULE(276)] - rule 6: SYN-only to port(s) 80:80 w/o direction specified, disabling for toclient direction
25/7/2022 -- 19:30:57 - <Info> - unable to find af-packet config for interface "eth0" or "default", using default values
25/7/2022 -- 19:30:57 - <Info> - Going to use 8 thread(s)
25/7/2022 -- 19:30:57 - <Notice> - all 8 packet processing threads, 4 management threads initialized, engine started.
25/7/2022 -- 19:30:58 - <Info> - All AFP capture threads are running.

As you can see, the socket is initialized. However, when I use the recommended suricatasc tool, I get this error:

[root@ip-172-31-31-38 /]# suricatasc 
Unable to connect to socket /var/run/suricata/suricata-command.socket: [Errno 2] No such file or directory

What user:group owns the socket file? `ls -l /var/run/suricata/suricata-command.socket`

Although it is initialized in the log, it does not appear inside the container.

[root@ip-172-31-31-38 /]# ls -l /var/run/suricata/suricata-command.socket
ls: cannot access '/var/run/suricata/suricata-command.socket': No such file or directory

Here is a snapshot of the `ss -t -a` command:

State           Recv-Q      Send-Q            Local Address:Port                       Peer Address:Port       Process      
LISTEN          0           4096                    0.0.0.0:cap                             0.0.0.0:*                       
LISTEN          0           4096                    0.0.0.0:4200                            0.0.0.0:*                       
LISTEN          0           4096                    0.0.0.0:houston                         0.0.0.0:*                       
LISTEN          0           4096                    0.0.0.0:27017                           0.0.0.0:*                       
LISTEN          0           4096                    0.0.0.0:etlservicemgr                   0.0.0.0:*                       
LISTEN          0           4096                    0.0.0.0:redis                           0.0.0.0:*                       
LISTEN          0           4096                    0.0.0.0:corelccam                       0.0.0.0:*                       
LISTEN          0           4096              127.0.0.53%lo:domain                          0.0.0.0:*                       
LISTEN          0           128                     0.0.0.0:ssh                             0.0.0.0:*                       
LISTEN          0           4096                    0.0.0.0:hbci                            0.0.0.0:*                       
LISTEN          0           4096                    0.0.0.0:origo-native                    0.0.0.0:*                       
LISTEN          0           511                   127.0.0.1:46875                           0.0.0.0:*                       
LISTEN          0           4096                    0.0.0.0:cgms                            0.0.0.0:*                       
LISTEN          0           4096                    0.0.0.0:mqtt                            0.0.0.0:*                       
LISTEN          0           4096                    0.0.0.0:8668                            0.0.0.0:*                       
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:57852                   
ESTAB           0           0                     127.0.0.1:40824                         127.0.0.1:46875                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:36000                   
ESTAB           0           0                     127.0.0.1:40862                         127.0.0.1:46875                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:40468                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:36616                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:54526                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:54460                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:35998                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:54338                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:41064                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:57856                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:56620                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:54276                   
SYN-RECV        0           0                  172.31.31.38:ssh                       61.177.173.54:52280                   
ESTAB           0           0                     127.0.0.1:46875                         127.0.0.1:40824                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:36110                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:58012                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:58134                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:40402                   
SYN-RECV        0           0                  172.31.31.38:ssh                       61.177.173.54:jstel                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:57944                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:54198                   
ESTAB           0           128                172.31.31.38:ssh                        79.149.57.84:44456                   
SYN-RECV        0           0                  172.31.31.38:ssh                       61.177.173.54:26364                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:56624                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:58072                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:36088                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:48204                   
SYN-RECV        0           0                  172.31.31.38:ssh                        47.24.76.147:49856                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:54194                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:36618                   
ESTAB           0           0                     127.0.0.1:46875                         127.0.0.1:40862                   
CLOSE-WAIT      0           0                     127.0.0.1:46875                         127.0.0.1:54398                   
LISTEN          0           4096                       [::]:cap                                [::]:*                       
LISTEN          0           4096                       [::]:4200                               [::]:*                       
LISTEN          0           4096                       [::]:houston                            [::]:*                       
LISTEN          0           4096                       [::]:27017                              [::]:*                       
LISTEN          0           4096                       [::]:etlservicemgr                      [::]:*                       
LISTEN          0           4096                       [::]:redis                              [::]:*                       
LISTEN          0           4096                       [::]:corelccam                          [::]:*                       
LISTEN          0           128                        [::]:ssh                                [::]:*                       
LISTEN          0           4096                       [::]:hbci                               [::]:*                       
LISTEN          0           4096                       [::]:origo-native                       [::]:*                       
LISTEN          0           4096                       [::]:cgms                               [::]:*                       
LISTEN          0           4096                       [::]:mqtt                               [::]:*                       
LISTEN          0           4096                       [::]:8668                               [::]:*

Although it is initialized in the log, it does not appear inside the container.

Which container are you looking in?

If this is in the suricata container itself then you are going to need to figure out why the suricata command is not creating the file.

If this is in the Telegraf container, then check the suricata container first.
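A quick check from the host, using the container name from your compose file, would be something like:

# Does the socket file exist inside the Suricata container at all?
docker exec suricata ls -l /var/run/suricata/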

I don’t have much experience mounting sockets between containers, so I’m not entirely sure the configs are correct.

Thanks !!! I think the socket issue is solved by sharing a volume between the containers:

Suricata service:

  suricata:
    image: jasonish/suricata:${SURICATA_VERSION}
    container_name: suricata
    #user: root
    #profiles: ["suricata"]
    restart: on-failure
    depends_on:
      - telegraf
    env_file:
      - './suricata/env.suricata'
    network_mode: "host"
    cap_add: 
      - NET_ADMIN
      - SYS_NICE
      - NET_RAW
    volumes:
      - ./suricata/suricata.yaml:/etc/suricata/suricata.yaml
      - ./suricata/log:/var/log/suricata
      - ./suricata/rules:/var/lib/suricata/rules
      - /var/run/shared:/var/run/suricata/

Telegraf configuration:

  telegraf:
    container_name: telegraf
    image: telegraf:${TELEGRAF_VERSION}
    user: root
   #profiles: ["telegraf"]
    networks:
      - influx
    ports:
      - 8125:8125/udp
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8086/ping"]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always
    depends_on:
      - influxdb
    env_file:
      - ./telegraf/telegraf.env
    volumes:
      - ./telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro
      - /var/run/shared:/var/run/

Here is the shared volume on the host:

ubuntu@ip-172-31-31-38:~/composer-suri-tele-infl-graf$ ls -la /var/run/shared/
total 0
drwxrwxrwx  2 lxd   996   60 Jul 25 21:21 .
drwxr-xr-x 29 root root 1060 Jul 25 20:53 ..
srw-rw----  1 lxd   996    0 Jul 25 21:21 suricata-command.socket
ubuntu@ip-172-31-31-38:~/composer-suri-tele-infl-graf$

Here is the test using the suggested tool:

[root@ip-172-31-31-38 /]# suricatasc
Command list: shutdown, command-list, help, version, uptime, running-mode, capture-mode, conf-get, dump-counters, reload-rules, ruleset-reload-rules, ruleset-reload-nonblocking, ruleset-reload-time, ruleset-stats, ruleset-failed-rules, register-tenant-handler, unregister-tenant-handler, register-tenant, reload-tenant, unregister-tenant, add-hostbit, remove-hostbit, list-hostbit, reopen-log-files, memcap-set, memcap-show, memcap-list, dataset-add, dataset-remove, iface-stat, iface-list, iface-bypassed-stat, ebpf-bypassed-stat, quit
>>> version
Success:
"6.0.6 RELEASE"
>>> capture-mode
Success:
"AF_PACKET_DEV"
>>> iface-list
Success:
{
    "count": 1,
    "ifaces": [
        "eth0"
    ]
}
>>> iface-stat eth0
Success:
{
    "bypassed": 0,
    "drop": 0,
    "invalid-checksums": 0,
    "pkts": 9198
}
>>>

However, I still don’t see Suricata’s metrics in InfluxDB:

Any recommendation?

Thanks !!! I think the socket issue is solved by sharing a volume between the containers:

Awesome!

      - /var/run/shared:/var/run/

This means the socket would be available at /var/run/suricata-command.socket in the telegraf container? Is that path in your telegraf config?

Let’s have you run `ls -l /var/run/suricata-command.socket` in the telegraf container as well if possible.
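For instance, from the host (container name taken from your compose file):

# Check the socket as seen from inside the Telegraf container
docker exec telegraf ls -l /var/run/suricata-command.socket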

However, I still don’t see Suricata’s metrics in InfluxDB:

Can we go back to the telegraf logs and see if there is a new error message as well?

Thanks!!!

Question1:

This means the socket would be available at /var/run/suricata-command.socket in the telegraf container? Is that path in your telegraf config?

Here is the path in telegraf.conf:

[[inputs.suricata]]
  ## Data sink for Suricata stats and alerts logs
  # This is expected to be a filename of a
  # unix socket to be created for listening.
  source = "/var/run/suricata-command.socket"

  # Delimiter for flattening field keys, e.g. subitem "alert" of "detect"
  # becomes "detect_alert" when delimiter is "_".
  delimiter = "_"

  ## Detect alert logs
  alerts = true

Question2:

Let’s have you run `ls -l /var/run/suricata-command.socket` in the telegraf container as well if possible.

Yes, here is the socket in the telegraf container:

root@390b4431fb3e:/var/run# ls -la
total 4
drwxrwxrwx 2  998  996   60 Jul 25 21:49 .
drwxr-xr-x 1 root root 4096 Jul 25 21:49 ..
srw-rw---- 1  998  996    0 Jul 25 21:49 suricata-command.socket
root@390b4431fb3e:/var/run#

Question3:

Can we go back to the telegraf logs and see if there is a new error message as well?

> Executing task: docker logs --tail 1000 -f 390b4431fb3e0b65a51e2f986da7193e78a0e6780dad0c229201bc83440ec268 <

2022-07-25T21:49:15Z I! Using config file: /etc/telegraf/telegraf.conf
2022-07-25T21:49:15Z W! DeprecationWarning: Option "parse_data_dog_tags" of plugin "inputs.statsd" deprecated since version 1.10.0 and will be removed in 2.0.0: use 'datadog_extensions' instead
2022-07-25T21:49:15Z I! Starting Telegraf 1.23.2
2022-07-25T21:49:15Z I! Loaded inputs: cpu disk diskio kernel mem processes statsd suricata swap system
2022-07-25T21:49:15Z I! Loaded aggregators: 
2022-07-25T21:49:15Z I! Loaded processors: 
2022-07-25T21:49:15Z I! Loaded outputs: influxdb_v2
2022-07-25T21:49:15Z I! Tags enabled: host=390b4431fb3e user=$USER
2022-07-25T21:49:15Z W! Deprecated inputs: 0 and 1 options
2022-07-25T21:49:15Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"390b4431fb3e", Flush Interval:10s
2022-07-25T21:49:15Z I! [inputs.statsd] UDP listening on "[::]:8125"
2022-07-25T21:49:15Z I! [inputs.statsd] Started the statsd service on ":8125"

Hi. I have installed the suricatasc tool in the Telegraf container in order to test the socket:

root@dd39f97b4f3f:/suricata/python# suricatasc
Unable to connect to socket @e_localstatedir@/suricata-command.socket: [Errno 2] No such file or directory
root@dd39f97b4f3f:/suricata/python# suricatasc /var/run/suricata-command.socket
Command list: shutdown, command-list, help, version, uptime, running-mode, capture-mode, conf-get, dump-counters, reload-rules, ruleset-reload-rules, ruleset-reload-nonblocking, ruleset-reload-time, ruleset-stats, ruleset-failed-rules, register-tenant-handler, unregister-tenant-handler, register-tenant, reload-tenant, unregister-tenant, add-hostbit, remove-hostbit, list-hostbit, reopen-log-files, memcap-set, memcap-show, memcap-list, dataset-add, dataset-remove, iface-stat, iface-list, iface-bypassed-stat, ebpf-bypassed-stat, quit
>>> iface-list
Success:
{
    "count": 1,
    "ifaces": [
        "eth0"
    ]
}
>>> iface-stat eth0
Success:
{
    "bypassed": 0,
    "drop": 11284,
    "invalid-checksums": 2,
    "pkts": 183099
}

Thanks for those logs.

It does look like the Telegraf container can finally see the socket. As long as the Telegraf user has uid 998 or is in group 996, this should be working.
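A quick way to compare the two, assuming the container is named telegraf:

# UID/GID that Telegraf runs as inside its container
docker exec telegraf id

# Numeric owner and group of the socket, as seen from the same container
docker exec telegraf ls -ln /var/run/suricata-command.socket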

Have you seen any additional error messages from Telegraf? I would let it run for at least 5 minutes to ensure it tries a few collection intervals.

Sorry for the delay in replying.

Finally, the issue has been solved by taking the deployment order of the containers into account (InfluxDB → Telegraf → Suricata), so that the socket is created before Suricata tries to connect to it.
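In docker-compose terms this ordering is expressed with depends_on, as in the service definitions above; a condensed sketch (the InfluxDB entry and its version variable are placeholders, unrelated settings omitted):

services:
  influxdb:
    image: influxdb:${INFLUXDB_VERSION}
  telegraf:
    image: telegraf:${TELEGRAF_VERSION}
    depends_on:
      - influxdb   # Telegraf starts after InfluxDB and creates the listening socket
  suricata:
    image: jasonish/suricata:${SURICATA_VERSION}
    depends_on:
      - telegraf   # Suricata starts last and connects to the socket

Note that depends_on only controls start order, not readiness, but here that was enough for Telegraf to create the socket before Suricata tried to connect.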

All the procedures followed have been documented in the following GitHub repository, including the troubleshooting carried out.