Sending GELF via HTTP - error "is missing mandatory . . . field"

I’m using Python requests to send GELF that looks like this:

{"full_message": "ProcessRollup2V12", "host": "fillmein", "version": "1.1", "timestamp": 1545401488441, "short_message": "csdatareplicator"}

The server responds with a 202, but the messages are never processed. I can see the following in server.log, even though my JSON clearly does contain those attributes.

WARN  [GelfCodec] GELF message is missing mandatory "host" field.
ERROR [DecodingProcessor] Unable to decode raw message 
ERROR [DecodingProcessor] Error processing message
java.lang.IllegalArgumentException: GELF message is missing mandatory "short_message" or "message" field.

Code snippet below:

import json
import requests

# graylog_input, headers, rkwargs, and cs_event are defined earlier in the script
gelf_log = {
    "version": "1.1",
    "host": "fillmein",
    "short_message": "csdatareplicator",
    "full_message": cs_event["name"],
    "timestamp": int(cs_event["timestamp"])
}

gelf_string = json.dumps(gelf_log).encode('utf-8')
print(gelf_string)

r = requests.post(
    graylog_input,
    json=gelf_string,
    headers=headers,
    **rkwargs
)

Huh, would you look at that? At first glance that does look like it should work… I’ll poke around a little, see what I can find.

Can you confirm that the GELF message arrives in the right format and state by sending it to a netcat listener? You’d skip the HTTP POST and instead use a raw TCP connection. You could also run a basic HTTP daemon (like Python’s BaseHTTPServer) instead, to see exactly what data comes in.
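
For example, a minimal listener along these lines (a rough sketch; adjust the port to whatever your script targets, and on Python 3 the module is http.server):

from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

class DumpHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Print exactly the body that Graylog would have received
        # (on Python 3: int(self.headers['Content-Length']))
        length = int(self.headers.getheader('Content-Length', 0))
        print(self.rfile.read(length))
        self.send_response(202)  # mimic the GELF HTTP input's 202 reply
        self.end_headers()

# Point your script's graylog_input URL at this host/port instead of Graylog
HTTPServer(('0.0.0.0', 12201), DumpHandler).serve_forever()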

EDIT:
Hang on… I’m not on the up-and-up with Python. Is requests.post an HTTP POST call, or is it something else? Because based on the discussion below, the GELF input on Graylog would be a simple TCP receiver, like netcat.

That discussion, btw, also mentions that GELF lines sent over TCP need to be terminated with a \0.
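
If you do end up testing that route, null-terminated GELF over a raw TCP connection would look something like this (hypothetical host and port):

import json
import socket

msg = json.dumps({"version": "1.1", "host": "fillmein",
                  "short_message": "csdatareplicator"})
# GELF TCP inputs expect each message to end with a null byte
sock = socket.create_connection(("graylog.example.org", 12201))
sock.sendall(msg.encode("utf-8") + b"\x00")
sock.close()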

I’m using this input - http://docs.graylog.org/en/2.5/pages/sending_data.html#gelf-via-http

I POSTed my document to a BaseHTTPServer and do get a valid JSON document:
{"full_message": "ProcessRollup2V12", "host": "fillmein", "version": "1.1", "timestamp": 1545401488441, "short_message": "csdatareplicator"}

Still investigating

Silly question: what happens when you send the test message through curl (as shown in the example on the page you linked to)?

With curl, I get the 202 on the client side and the error below (one time only, as if something has died server-side). I think I must have a configuration problem deeper than this input.

2019-01-09T03:10:04.205Z INFO  [InputStateListener] Input [GELF HTTP/5c3295d691aff90406a7e391] is now RUNNING
2019-01-09T03:12:56.457Z WARN  [ProxiedResource] Unable to call http://192.168.56.3:9000/api/system/jobs on node <e0f94c5e-6fd8-443e-9d57-a5dcf5ed563a>
java.net.SocketTimeoutException: timeout
        at okio.Okio$4.newTimeoutException(Okio.java:230) ~[graylog.jar:?]
        at okio.AsyncTimeout.exit(AsyncTimeout.java:285) ~[graylog.jar:?]
        at okio.AsyncTimeout$2.read(AsyncTimeout.java:241) ~[graylog.jar:?]
        at okio.RealBufferedSource.indexOf(RealBufferedSource.java:345) ~[graylog.jar:?]

Huh! Yeah, that seems iffy mate :confused: Time to poke around in server.log.

If I understand it right, the problem does not seem to be on your client side but on the server side.

As @Totally_Not_A_Robot suggests, I would look around in server.log to see whether you find other errors and backtraces there.

Do you mind sharing your Input configuration via screenshot with us, so I can try to reproduce your problem?
And what version of Graylog exactly are you running?

Thank you for looking. As you have probably detected, it’s my first foray into Graylog.

System background:

Version:     2.5.1+34194da, codename Trippy Trampoline
JVM:    Oracle Corporation 1.8.0_191 on Linux 4.4.0-131-generic
REST Transport address:     http://192.168.173.65:9000/api/

server.conf:

is_master = true
node_id_file = /etc/graylog/server/node-id
plugin_dir = /usr/share/graylog-server/plugin
rest_listen_uri = http://0.0.0.0:9000/api/
web_listen_uri = http://0.0.0.0:9000/
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 4
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://localhost/graylog
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
content_packs_dir = /usr/share/graylog-server/contentpacks
content_packs_auto_load = grok-patterns.json
proxied_requests_thread_pool_size = 32

server.log (sorry for pasting inline - didn’t see an attachment option)

2019-01-09T11:49:18.487Z INFO  [CmdLineTool] Loaded plugin: AWS plugins 2.5.1 [org.graylog.aws.plugin.AWSPlugin]
2019-01-09T11:49:18.490Z INFO  [CmdLineTool] Loaded plugin: Elastic Beats Input 2.5.1 [org.graylog.plugins.beats.BeatsInputPlugin]
2019-01-09T11:49:18.491Z INFO  [CmdLineTool] Loaded plugin: CEF Input 2.5.1 [org.graylog.plugins.cef.CEFInputPlugin]
2019-01-09T11:49:18.492Z INFO  [CmdLineTool] Loaded plugin: Collector 2.5.1 [org.graylog.plugins.collector.CollectorPlugin]
2019-01-09T11:49:18.494Z INFO  [CmdLineTool] Loaded plugin: Enterprise Integration Plugin 2.5.1 [org.graylog.plugins.enterprise_integration.EnterpriseIntegrationPlugin]
2019-01-09T11:49:18.495Z INFO  [CmdLineTool] Loaded plugin: MapWidgetPlugin 2.5.1 [org.graylog.plugins.map.MapWidgetPlugin]
2019-01-09T11:49:18.500Z INFO  [CmdLineTool] Loaded plugin: NetFlow Plugin 2.5.1 [org.graylog.plugins.netflow.NetFlowPlugin]
2019-01-09T11:49:18.517Z INFO  [CmdLineTool] Loaded plugin: Pipeline Processor Plugin 2.5.1 [org.graylog.plugins.pipelineprocessor.ProcessorPlugin]
2019-01-09T11:49:18.517Z INFO  [CmdLineTool] Loaded plugin: Threat Intelligence Plugin 2.5.1 [org.graylog.plugins.threatintel.ThreatIntelPlugin]
2019-01-09T11:49:19.300Z INFO  [CmdLineTool] Running with JVM arguments: -Xms1g -Xmx1g -XX:NewRatio=1 -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:+UseParNewGC -XX:-OmitStackTraceInFastThrow -Dlog4j.configurationFile=file:///etc/graylog/server/log4j2.xml -Djava.library.path=/usr/share/graylog-server/lib/sigar -Dgraylog2.installation_source=deb
2019-01-09T11:49:20.137Z INFO  [Version] HV000001: Hibernate Validator 5.1.3.Final
2019-01-09T11:49:25.967Z INFO  [InputBufferImpl] Message journal is enabled.
2019-01-09T11:49:26.016Z INFO  [NodeId] Node ID: e0f94c5e-6fd8-443e-9d57-a5dcf5ed563a
2019-01-09T11:49:26.453Z INFO  [LogManager] Loading logs.
2019-01-09T11:49:26.561Z INFO  [LogManager] Logs loading complete.
2019-01-09T11:49:26.561Z INFO  [KafkaJournal] Initialized Kafka based journal at /var/lib/graylog-server/journal
2019-01-09T11:49:26.582Z INFO  [InputBufferImpl] Initialized InputBufferImpl with ring size <65536> and wait strategy <BlockingWaitStrategy>, running 2 parallel message handlers.
2019-01-09T11:49:26.628Z INFO  [cluster] Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=5000}
2019-01-09T11:49:26.840Z INFO  [cluster] No server chosen by ReadPreferenceServerSelector{readPreference=primary} from cluster description ClusterDescription{type=UNKNOWN, connectionMode=SINGLE, serverDescriptions=[ServerDescription{address=localhost:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out
2019-01-09T11:49:26.887Z INFO  [connection] Opened connection [connectionId{localValue:1, serverValue:24}] to localhost:27017
2019-01-09T11:49:26.889Z INFO  [cluster] Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 6, 9]}, minWireVersion=0, maxWireVersion=6, maxDocumentSize=16777216, roundTripTimeNanos=1436077}
2019-01-09T11:49:26.910Z INFO  [connection] Opened connection [connectionId{localValue:2, serverValue:25}] to localhost:27017
2019-01-09T11:49:27.749Z INFO  [AbstractJestClient] Setting server pool to a list of 1 servers: [http://127.0.0.1:9200]
2019-01-09T11:49:27.750Z INFO  [JestClientFactory] Using multi thread/connection supporting pooling connection manager
2019-01-09T11:49:27.895Z INFO  [JestClientFactory] Using custom ObjectMapper instance
2019-01-09T11:49:27.895Z INFO  [JestClientFactory] Node Discovery disabled...
2019-01-09T11:49:27.895Z INFO  [JestClientFactory] Idle connection reaping disabled...
2019-01-09T11:49:28.528Z INFO  [ProcessBuffer] Initialized ProcessBuffer with ring size <65536> and wait strategy <BlockingWaitStrategy>.
2019-01-09T11:49:31.657Z INFO  [RulesEngineProvider] No static rules file loaded.
2019-01-09T11:49:32.253Z WARN  [GeoIpResolverEngine] GeoIP database file does not exist: /etc/graylog/server/GeoLite2-City.mmdb
2019-01-09T11:49:32.297Z INFO  [OutputBuffer] Initialized OutputBuffer with ring size <65536> and wait strategy <BlockingWaitStrategy>.
2019-01-09T11:49:32.356Z INFO  [connection] Opened connection [connectionId{localValue:3, serverValue:26}] to localhost:27017
2019-01-09T11:49:32.376Z WARN  [GeoIpResolverEngine] GeoIP database file does not exist: /etc/graylog/server/GeoLite2-City.mmdb
2019-01-09T11:49:32.462Z WARN  [GeoIpResolverEngine] GeoIP database file does not exist: /etc/graylog/server/GeoLite2-City.mmdb
2019-01-09T11:49:32.544Z WARN  [GeoIpResolverEngine] GeoIP database file does not exist: /etc/graylog/server/GeoLite2-City.mmdb
2019-01-09T11:49:32.641Z WARN  [GeoIpResolverEngine] GeoIP database file does not exist: /etc/graylog/server/GeoLite2-City.mmdb
2019-01-09T11:49:33.378Z INFO  [ServerBootstrap] Graylog server 2.5.1+34194da starting up
2019-01-09T11:49:33.379Z INFO  [ServerBootstrap] JRE: Oracle Corporation 1.8.0_191 on Linux 4.4.0-131-generic
2019-01-09T11:49:33.381Z INFO  [ServerBootstrap] Deployment: deb
2019-01-09T11:49:33.382Z INFO  [ServerBootstrap] OS: Ubuntu 16.04.5 LTS (xenial)
2019-01-09T11:49:33.382Z INFO  [ServerBootstrap] Arch: amd64
2019-01-09T11:49:33.492Z INFO  [PeriodicalsService] Starting 25 periodicals ...
2019-01-09T11:49:33.492Z INFO  [Periodicals] Starting [org.graylog2.periodical.ThroughputCalculator] periodical in [0s], polling every [1s].
2019-01-09T11:49:33.500Z INFO  [Periodicals] Starting [org.graylog2.periodical.AlertScannerThread] periodical in [10s], polling every [60s].
2019-01-09T11:49:33.501Z INFO  [Periodicals] Starting [org.graylog2.periodical.BatchedElasticSearchOutputFlushThread] periodical in [0s], polling every [1s].
2019-01-09T11:49:33.501Z INFO  [Periodicals] Starting [org.graylog2.periodical.ClusterHealthCheckThread] periodical in [120s], polling every [20s].
2019-01-09T11:49:33.508Z INFO  [Periodicals] Starting [org.graylog2.periodical.ContentPackLoaderPeriodical] periodical, running forever.
2019-01-09T11:49:33.512Z INFO  [Periodicals] Starting [org.graylog2.periodical.GarbageCollectionWarningThread] periodical, running forever.
2019-01-09T11:49:33.514Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexerClusterCheckerThread] periodical in [0s], polling every [30s].
2019-01-09T11:49:33.515Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexRetentionThread] periodical in [0s], polling every [300s].
2019-01-09T11:49:33.518Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexRotationThread] periodical in [0s], polling every [10s].
2019-01-09T11:49:33.523Z INFO  [Periodicals] Starting [org.graylog2.periodical.NodePingThread] periodical in [0s], polling every [1s].
2019-01-09T11:49:33.525Z INFO  [Periodicals] Starting [org.graylog2.periodical.VersionCheckThread] periodical in [300s], polling every [1800s].
2019-01-09T11:49:33.528Z INFO  [Periodicals] Starting [org.graylog2.periodical.ThrottleStateUpdaterThread] periodical in [1s], polling every [1s].
2019-01-09T11:49:33.528Z INFO  [Periodicals] Starting [org.graylog2.events.ClusterEventPeriodical] periodical in [0s], polling every [1s].
2019-01-09T11:49:33.545Z INFO  [connection] Opened connection [connectionId{localValue:4, serverValue:27}] to localhost:27017
2019-01-09T11:49:33.561Z INFO  [connection] Opened connection [connectionId{localValue:5, serverValue:28}] to localhost:27017
2019-01-09T11:49:33.566Z INFO  [Periodicals] Starting [org.graylog2.events.ClusterEventCleanupPeriodical] periodical in [0s], polling every [86400s].
2019-01-09T11:49:33.601Z INFO  [Periodicals] Starting [org.graylog2.periodical.ClusterIdGeneratorPeriodical] periodical, running forever.
2019-01-09T11:49:33.604Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexRangesMigrationPeriodical] periodical, running forever.
2019-01-09T11:49:33.604Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexRangesCleanupPeriodical] periodical in [15s], polling every [3600s].
2019-01-09T11:49:33.613Z INFO  [connection] Opened connection [connectionId{localValue:6, serverValue:29}] to localhost:27017
2019-01-09T11:49:33.623Z INFO  [connection] Opened connection [connectionId{localValue:8, serverValue:30}] to localhost:27017
2019-01-09T11:49:33.626Z INFO  [connection] Opened connection [connectionId{localValue:9, serverValue:31}] to localhost:27017
2019-01-09T11:49:33.632Z INFO  [connection] Opened connection [connectionId{localValue:7, serverValue:32}] to localhost:27017
2019-01-09T11:49:33.795Z INFO  [PeriodicalsService] Not starting [org.graylog2.periodical.UserPermissionMigrationPeriodical] periodical. Not configured to run on this node.
2019-01-09T11:49:33.795Z INFO  [Periodicals] Starting [org.graylog2.periodical.AlarmCallbacksMigrationPeriodical] periodical, running forever.
2019-01-09T11:49:33.806Z INFO  [Periodicals] Starting [org.graylog2.periodical.ConfigurationManagementPeriodical] periodical, running forever.
2019-01-09T11:49:33.845Z INFO  [Periodicals] Starting [org.graylog2.periodical.LdapGroupMappingMigration] periodical, running forever.
2019-01-09T11:49:33.849Z INFO  [Periodicals] Starting [org.graylog2.periodical.IndexFailuresPeriodical] periodical, running forever.
2019-01-09T11:49:33.852Z INFO  [Periodicals] Starting [org.graylog2.periodical.TrafficCounterCalculator] periodical in [0s], polling every [1s].
2019-01-09T11:49:33.867Z INFO  [Periodicals] Starting [org.graylog.plugins.pipelineprocessor.periodical.LegacyDefaultStreamMigration] periodical, running forever.
2019-01-09T11:49:33.877Z INFO  [Periodicals] Starting [org.graylog.plugins.collector.periodical.PurgeExpiredCollectorsThread] periodical in [0s], polling every [3600s].
2019-01-09T11:49:33.903Z INFO  [LookupTableService] Data Adapter abuse-ch-ransomware-domains/5c30ad9291aff97b13e87f89 [@5ef81738] STARTING
2019-01-09T11:49:33.903Z ERROR [LookupDataAdapter] Couldn't start data adapter <abuse-ch-ransomware-domains/5c30ad9291aff97b13e87f89/@5ef81738>
org.graylog.plugins.threatintel.tools.AdapterDisabledException: Abuse.ch service is disabled, not starting adapter. To enable it please go to System / Configurations.
	at org.graylog.plugins.threatintel.adapters.abusech.AbuseChRansomAdapter.doStart(AbuseChRansomAdapter.java:80) ~[?:?]
	at org.graylog2.plugin.lookup.LookupDataAdapter.startUp(LookupDataAdapter.java:59) [graylog.jar:?]
	at com.google.common.util.concurrent.AbstractIdleService$DelegateService$1.run(AbstractIdleService.java:62) [graylog.jar:?]
	at com.google.common.util.concurrent.Callables$4.run(Callables.java:119) [graylog.jar:?]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
2019-01-09T11:49:33.909Z ERROR [LookupDataAdapter] Couldn't start data adapter <spamhaus-drop/5c30ad9291aff97b13e87f8c/@3eb65010>
org.graylog.plugins.threatintel.tools.AdapterDisabledException: Spamhaus service is disabled, not starting (E)DROP adapter. To enable it please go to System / Configurations.
	at org.graylog.plugins.threatintel.adapters.spamhaus.SpamhausEDROPDataAdapter.doStart(SpamhausEDROPDataAdapter.java:68) ~[?:?]
	at org.graylog2.plugin.lookup.LookupDataAdapter.startUp(LookupDataAdapter.java:59) [graylog.jar:?]
	at com.google.common.util.concurrent.AbstractIdleService$DelegateService$1.run(AbstractIdleService.java:62) [graylog.jar:?]
	at com.google.common.util.concurrent.Callables$4.run(Callables.java:119) [graylog.jar:?]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
2019-01-09T11:49:33.913Z ERROR [LookupDataAdapter] Couldn't start data adapter <tor-exit-node/5c30ad9291aff97b13e87f88/@5ee8e704>
org.graylog.plugins.threatintel.tools.AdapterDisabledException: TOR service is disabled, not starting TOR exit addresses adapter. To enable it please go to System / Configurations.
	at org.graylog.plugins.threatintel.adapters.tor.TorExitNodeDataAdapter.doStart(TorExitNodeDataAdapter.java:73) ~[?:?]
	at org.graylog2.plugin.lookup.LookupDataAdapter.startUp(LookupDataAdapter.java:59) [graylog.jar:?]
	at com.google.common.util.concurrent.AbstractIdleService$DelegateService$1.run(AbstractIdleService.java:62) [graylog.jar:?]
	at com.google.common.util.concurrent.Callables$4.run(Callables.java:119) [graylog.jar:?]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
2019-01-09T11:49:33.908Z WARN  [OTXDataAdapter] OTX API key is missing. Make sure to add the key to allow higher request limits.
2019-01-09T11:49:33.908Z INFO  [LookupTableService] Data Adapter abuse-ch-ransomware-ip/5c30ad9291aff97b13e87f8a [@6d38bc78] STARTING
2019-01-09T11:49:33.928Z WARN  [OTXDataAdapter] OTX API key is missing. Make sure to add the key to allow higher request limits.
2019-01-09T11:49:33.945Z INFO  [LookupTableService] Data Adapter otx-api-ip/5c30ad9291aff97b13e87f86 [@71a74e46] STARTING
2019-01-09T11:49:33.948Z INFO  [LookupTableService] Data Adapter spamhaus-drop/5c30ad9291aff97b13e87f8c [@3eb65010] STARTING
2019-01-09T11:49:33.948Z INFO  [LookupTableService] Data Adapter spamhaus-drop/5c30ad9291aff97b13e87f8c [@3eb65010] RUNNING
2019-01-09T11:49:33.949Z INFO  [LookupTableService] Data Adapter tor-exit-node/5c30ad9291aff97b13e87f88 [@5ee8e704] STARTING
2019-01-09T11:49:33.949Z INFO  [LookupTableService] Data Adapter tor-exit-node/5c30ad9291aff97b13e87f88 [@5ee8e704] RUNNING
2019-01-09T11:49:33.949Z INFO  [LookupTableService] Data Adapter otx-api-domain/5c30ad9291aff97b13e87f8b [@70ba9717] STARTING
2019-01-09T11:49:33.947Z INFO  [LookupTableService] Data Adapter whois/5c30ad9291aff97b13e87f8d [@60a47f80] STARTING
2019-01-09T11:49:33.951Z INFO  [LookupTableService] Data Adapter whois/5c30ad9291aff97b13e87f8d [@60a47f80] RUNNING
2019-01-09T11:49:33.953Z INFO  [LookupTableService] Data Adapter abuse-ch-ransomware-domains/5c30ad9291aff97b13e87f89 [@5ef81738] RUNNING
2019-01-09T11:49:33.907Z ERROR [LookupDataAdapter] Couldn't start data adapter <abuse-ch-ransomware-ip/5c30ad9291aff97b13e87f8a/@6d38bc78>
org.graylog.plugins.threatintel.tools.AdapterDisabledException: Abuse.ch service is disabled, not starting adapter. To enable it please go to System / Configurations.
	at org.graylog.plugins.threatintel.adapters.abusech.AbuseChRansomAdapter.doStart(AbuseChRansomAdapter.java:80) ~[?:?]
	at org.graylog2.plugin.lookup.LookupDataAdapter.startUp(LookupDataAdapter.java:59) [graylog.jar:?]
	at com.google.common.util.concurrent.AbstractIdleService$DelegateService$1.run(AbstractIdleService.java:62) [graylog.jar:?]
	at com.google.common.util.concurrent.Callables$4.run(Callables.java:119) [graylog.jar:?]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
2019-01-09T11:49:33.959Z INFO  [LegacyDefaultStreamMigration] Legacy default stream has no connections, no migration needed.
2019-01-09T11:49:33.972Z INFO  [LookupTableService] Data Adapter otx-api-domain/5c30ad9291aff97b13e87f8b [@70ba9717] RUNNING
2019-01-09T11:49:33.976Z INFO  [LookupTableService] Data Adapter abuse-ch-ransomware-ip/5c30ad9291aff97b13e87f8a [@6d38bc78] RUNNING
2019-01-09T11:49:33.984Z INFO  [LookupTableService] Data Adapter otx-api-ip/5c30ad9291aff97b13e87f86 [@71a74e46] RUNNING
2019-01-09T11:49:34.068Z INFO  [LookupTableService] Cache threat-intel-uncached-adapters/5c30ad9191aff97b13e87f81 [@76779e46] STARTING
2019-01-09T11:49:34.077Z INFO  [LookupTableService] Cache spamhaus-e-drop-cache/5c30ad9191aff97b13e87f85 [@4c2f5b3a] STARTING
2019-01-09T11:49:34.078Z INFO  [LookupTableService] Cache otx-api-domain-cache/5c30ad9191aff97b13e87f83 [@320aa922] STARTING
2019-01-09T11:49:34.079Z INFO  [LookupTableService] Cache otx-api-ip-cache/5c30ad9191aff97b13e87f80 [@3b4bf3d] STARTING
2019-01-09T11:49:34.086Z INFO  [LookupTableService] Cache whois-cache/5c30ad9191aff97b13e87f84 [@b01763c] STARTING
2019-01-09T11:49:34.117Z INFO  [LookupTableService] Cache spamhaus-e-drop-cache/5c30ad9191aff97b13e87f85 [@4c2f5b3a] RUNNING
2019-01-09T11:49:34.117Z INFO  [LookupTableService] Cache otx-api-domain-cache/5c30ad9191aff97b13e87f83 [@320aa922] RUNNING
2019-01-09T11:49:34.117Z INFO  [LookupTableService] Cache otx-api-ip-cache/5c30ad9191aff97b13e87f80 [@3b4bf3d] RUNNING
2019-01-09T11:49:34.117Z INFO  [LookupTableService] Cache threat-intel-uncached-adapters/5c30ad9191aff97b13e87f81 [@76779e46] RUNNING
2019-01-09T11:49:34.118Z INFO  [LookupTableService] Cache whois-cache/5c30ad9191aff97b13e87f84 [@b01763c] RUNNING
2019-01-09T11:49:34.189Z INFO  [LookupTableService] Starting lookup table abuse-ch-ransomware-domains/5c30ad9291aff97b13e87f8f [@2aa5348e] using cache threat-intel-uncached-adapters/5c30ad9191aff97b13e87f81 [@76779e46], data adapter abuse-ch-ransomware-domains/5c30ad9291aff97b13e87f89 [@5ef81738]
2019-01-09T11:49:34.190Z INFO  [LookupTableService] Starting lookup table whois/5c30ad9291aff97b13e87f90 [@213f486a] using cache whois-cache/5c30ad9191aff97b13e87f84 [@b01763c], data adapter whois/5c30ad9291aff97b13e87f8d [@60a47f80]
2019-01-09T11:49:34.190Z INFO  [LookupTableService] Starting lookup table otx-api-domain/5c30ad9291aff97b13e87f91 [@37c8f247] using cache otx-api-domain-cache/5c30ad9191aff97b13e87f83 [@320aa922], data adapter otx-api-domain/5c30ad9291aff97b13e87f8b [@70ba9717]
2019-01-09T11:49:34.190Z INFO  [LookupTableService] Starting lookup table tor-exit-node-list/5c30ad9291aff97b13e87f92 [@55067cd2] using cache threat-intel-uncached-adapters/5c30ad9191aff97b13e87f81 [@76779e46], data adapter tor-exit-node/5c30ad9291aff97b13e87f88 [@5ee8e704]
2019-01-09T11:49:34.190Z INFO  [LookupTableService] Starting lookup table spamhaus-drop/5c30ad9291aff97b13e87f93 [@568433b8] using cache spamhaus-e-drop-cache/5c30ad9191aff97b13e87f85 [@4c2f5b3a], data adapter spamhaus-drop/5c30ad9291aff97b13e87f8c [@3eb65010]
2019-01-09T11:49:34.190Z INFO  [LookupTableService] Starting lookup table abuse-ch-ransomware-ip/5c30ad9291aff97b13e87f94 [@da34d1a] using cache threat-intel-uncached-adapters/5c30ad9191aff97b13e87f81 [@76779e46], data adapter abuse-ch-ransomware-ip/5c30ad9291aff97b13e87f8a [@6d38bc78]
2019-01-09T11:49:34.190Z INFO  [LookupTableService] Starting lookup table otx-api-ip/5c30ad9291aff97b13e87f95 [@7d09cc23] using cache otx-api-ip-cache/5c30ad9191aff97b13e87f80 [@3b4bf3d], data adapter otx-api-ip/5c30ad9291aff97b13e87f86 [@71a74e46]
2019-01-09T11:49:35.190Z INFO  [JerseyService] Enabling CORS for HTTP endpoint
2019-01-09T11:49:58.100Z INFO  [NetworkListener] Started listener bound to [0.0.0.0:9000]
2019-01-09T11:49:58.102Z INFO  [HttpServer] [HttpServer] Started.
2019-01-09T11:49:58.106Z INFO  [JerseyService] Started REST API at <http://0.0.0.0:9000/api/>
2019-01-09T11:49:58.106Z INFO  [JerseyService] Started Web Interface at <http://0.0.0.0:9000/>
2019-01-09T11:49:58.121Z INFO  [ServerBootstrap] Services started, startup times in ms: {OutputSetupService [RUNNING]=86, BufferSynchronizerService [RUNNING]=126, KafkaJournal [RUNNING]=135, InputSetupService [RUNNING]=235, ConfigurationEtagService [RUNNING]=307, StreamCacheService [RUNNING]=307, JournalReader [RUNNING]=313, PeriodicalsService [RUNNING]=479, LookupTableService [RUNNING]=718, JerseyService [RUNNING]=24634}
2019-01-09T11:49:58.124Z INFO  [ServiceManagerListener] Services are healthy
2019-01-09T11:49:58.124Z INFO  [InputSetupService] Triggering launching persisted inputs, node transitioned from Uninitialized [LB:DEAD] to Running [LB:ALIVE]
2019-01-09T11:49:58.128Z INFO  [ServerBootstrap] Graylog server up and running.
2019-01-09T11:49:58.160Z INFO  [InputStateListener] Input [GELF HTTP/5c3295d691aff90406a7e391] is now STARTING
2019-01-09T11:49:58.279Z WARN  [NettyTransport] receiveBufferSize (SO_RCVBUF) for input GELFHttpInput{title=CrowdStrike EDR, type=org.graylog2.inputs.gelf.http.GELFHttpInput, nodeId=null} should be 1048576 but is 212992.
2019-01-09T11:49:58.284Z INFO  [InputStateListener] Input [GELF HTTP/5c3295d691aff90406a7e391] is now RUNNING
2019-01-09T11:50:27.036Z WARN  [ProxiedResource] Unable to call http://192.168.173.65:9000/api/system/inputstates on node <e0f94c5e-6fd8-443e-9d57-a5dcf5ed563a>
java.net.SocketTimeoutException: timeout
	at okio.Okio$4.newTimeoutException(Okio.java:230) ~[graylog.jar:?]
	at okio.AsyncTimeout.exit(AsyncTimeout.java:285) ~[graylog.jar:?]
	at okio.AsyncTimeout$2.read(AsyncTimeout.java:241) ~[graylog.jar:?]
	at okio.RealBufferedSource.indexOf(RealBufferedSource.java:345) ~[graylog.jar:?]
	at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:217) ~[graylog.jar:?]
	at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:211) ~[graylog.jar:?]
	at okhttp3.internal.http1.Http1Codec.readResponseHeaders(Http1Codec.java:187) ~[graylog.jar:?]
	at okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.java:88) ~[graylog.jar:?]
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[graylog.jar:?]
	at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:45) ~[graylog.jar:?]
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[graylog.jar:?]
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) ~[graylog.jar:?]
	at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93) ~[graylog.jar:?]
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[graylog.jar:?]
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) ~[graylog.jar:?]
	at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93) ~[graylog.jar:?]
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[graylog.jar:?]
	at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:125) ~[graylog.jar:?]
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[graylog.jar:?]
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) ~[graylog.jar:?]
	at org.graylog2.rest.RemoteInterfaceProvider.lambda$get$0(RemoteInterfaceProvider.java:61) ~[graylog.jar:?]
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[graylog.jar:?]
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) ~[graylog.jar:?]
	at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:200) ~[graylog.jar:?]
	at okhttp3.RealCall.execute(RealCall.java:77) ~[graylog.jar:?]
	at retrofit2.OkHttpCall.execute(OkHttpCall.java:180) ~[graylog.jar:?]
	at org.graylog2.shared.rest.resources.ProxiedResource.lambda$getForAllNodes$0(ProxiedResource.java:76) ~[graylog.jar:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_191]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
Caused by: java.net.SocketException: Socket closed
	at java.net.SocketInputStream.read(SocketInputStream.java:204) ~[?:1.8.0_191]
	at java.net.SocketInputStream.read(SocketInputStream.java:141) ~[?:1.8.0_191]
	at okio.Okio$2.read(Okio.java:139) ~[graylog.jar:?]
	at okio.AsyncTimeout$2.read(AsyncTimeout.java:237) ~[graylog.jar:?]
	... 28 more
2019-01-09T11:50:28.615Z WARN  [ProxiedResource] Unable to call http://192.168.173.65:9000/api/system/inputstates on node <e0f94c5e-6fd8-443e-9d57-a5dcf5ed563a>
java.net.SocketTimeoutException: timeout
	at okio.Okio$4.newTimeoutException(Okio.java:230) ~[graylog.jar:?]
	at okio.AsyncTimeout.exit(AsyncTimeout.java:285) ~[graylog.jar:?]
	at okio.AsyncTimeout$2.read(AsyncTimeout.java:241) ~[graylog.jar:?]
	at okio.RealBufferedSource.indexOf(RealBufferedSource.java:345) ~[graylog.jar:?]
	at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:217) ~[graylog.jar:?]
	at okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:211) ~[graylog.jar:?]
	at okhttp3.internal.http1.Http1Codec.readResponseHeaders(Http1Codec.java:187) ~[graylog.jar:?]
	at okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.java:88) ~[graylog.jar:?]
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[graylog.jar:?]
	at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:45) ~[graylog.jar:?]
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[graylog.jar:?]
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) ~[graylog.jar:?]
	at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93) ~[graylog.jar:?]
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[graylog.jar:?]
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) ~[graylog.jar:?]
	at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93) ~[graylog.jar:?]
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[graylog.jar:?]
	at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:125) ~[graylog.jar:?]
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[graylog.jar:?]
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) ~[graylog.jar:?]
	at org.graylog2.rest.RemoteInterfaceProvider.lambda$get$0(RemoteInterfaceProvider.java:61) ~[graylog.jar:?]
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) ~[graylog.jar:?]
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) ~[graylog.jar:?]
	at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:200) ~[graylog.jar:?]
	at okhttp3.RealCall.execute(RealCall.java:77) ~[graylog.jar:?]
	at retrofit2.OkHttpCall.execute(OkHttpCall.java:180) ~[graylog.jar:?]
	at org.graylog2.shared.rest.resources.ProxiedResource.lambda$getForAllNodes$0(ProxiedResource.java:76) ~[graylog.jar:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_191]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
Caused by: java.net.SocketException: Socket closed
	at java.net.SocketInputStream.read(SocketInputStream.java:204) ~[?:1.8.0_191]
	at java.net.SocketInputStream.read(SocketInputStream.java:141) ~[?:1.8.0_191]
	at okio.Okio$2.read(Okio.java:139) ~[graylog.jar:?]
	at okio.AsyncTimeout$2.read(AsyncTimeout.java:237) ~[graylog.jar:?]
	... 28 more

Thanks for putting in the effort @hkelley :slight_smile:

What Konrad meant when he wrote:

…is: please go to the Graylog GUI, browse to System > Inputs > your GELF input, and screenshot the current config. Or just type the fields into a table here, if that’s easier :wink:

Also, this box:
http://192.168.173.65:9000/api/system/inputstates

I assume that’s your Graylog box, right?

  • Is the IP actually correct?
  • Can you confirm whether the Graylog GUI is still available once those errors show up? It runs on :9000 as well.
  • Can you confirm with curl whether the API URL is really not reachable once those errors show up? (A quick check from the client is sketched below.)
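
Something along these lines from the remote client would settle that last point (credentials here are hypothetical):

import requests

# Same URL as in the ProxiedResource warning above
r = requests.get("http://192.168.173.65:9000/api/system/inputstates",
                 auth=("admin", "password"), timeout=5)
print(r.status_code)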

Will post a screenshot shortly. In the interim:

  • IP is correct
  • Remote client can still reach UI and API browser post-error

Can you reliably reproduce the ProxiedResource error by sending that message through curl? Because then the two would obviously be related.

Yes, that error is the one that pops up just once after the POST.

Input config for GELF HTTP

bind_address: 0.0.0.0
decompress_size_limit: 8388608
enable_cors: true
idle_writer_timeout: 60
max_chunk_size: 65536
override_source: <empty>
port: 12201
recv_buffer_size: 1048576
tcp_keepalive: false
tls_cert_file: <empty>
tls_client_auth: disabled
tls_client_auth_cert_file: <empty>
tls_enable: false
tls_key_file: <empty>
tls_key_password: ********

That’s just the darnedest thing :smiley:

I tried to reproduce this on the 3.0 beta and also got an error, but a different one. Tomorrow I will try to understand that one first, and after that I will go back to 2.5 and take a look there.

I can’t reproduce your problem with the same config.

I set up the GELF HTTP input on 2.5.1 with the default config (it looked like you did the same).

I was able to successfully run this command:
curl -X POST -H 'Content-Type: application/json' -d '{ "version": "1.1", "host": "example.org", "short_message": "A short message", "level": 5, "_some_info": "foo" }' 'http://localhost:12201/gelf'

I assume you did the same?

I’m not sure that @hkelley did… For starters, they’re using Python, and perhaps the method they’re calling is not 100% comparable to your example. Unfortunately we can’t be sure, because the called URL is not shown in the example in the top post.
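
One guess, for what it’s worth: requests.post(json=...) runs json.dumps() on whatever it is given. The snippet in the top post passes gelf_string, which is already JSON-encoded, so it would be encoded a second time and arrive as a JSON string rather than an object. That is exactly the kind of payload that would make the codec report fields as missing even though they are plainly there. A sketch of the difference (hypothetical URL):

import json
import requests

gelf_log = {"version": "1.1", "host": "fillmein",
            "short_message": "csdatareplicator"}

# Double-encoded: requests dumps the already-encoded string again, so the
# body is a JSON *string* with no top-level fields for the codec to find
requests.post("http://graylog.example.org:12201/gelf",
              json=json.dumps(gelf_log))

# Either of these sends the object itself:
requests.post("http://graylog.example.org:12201/gelf", json=gelf_log)
requests.post("http://graylog.example.org:12201/gelf",
              data=json.dumps(gelf_log).encode("utf-8"))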

I have reinstalled Graylog. The /var/lib/graylog-server/journal folder was missing, so I wanted to start over.

The local curl POST now lands in the index. The remote POST from Python is landing in the /var/lib/graylog-server/journal/messagejournal-0.log file but is not hitting the index.

I will investigate further tomorrow.

But dude, that’s totally progress :slight_smile:

Good on you!

Yes, progress.

Where do you suggest I look to determine why the messages don’t leave the journal? Should the journal message files be emptied as the messages are indexed, or does it roll over into new message files as it goes?

The journal keeps messages for as long as the connection to Elasticsearch is broken.
Under normal conditions the journal is continuously flushed every few seconds, pushing messages through the outputs, for example into Elasticsearch. If the journal keeps growing, there are problems talking to the Elasticsearch backend.
Under normal conditions the journal is continuously flushed, every few seconds, pushing messages through the outputs. For example into ElasticSearch. If the journal keeps growing, there are problems in talking to the Elastic backend.