Edit Event Definition page does not load

1. Describe your incident:
We have many alarms. When I try to open the Edit Event Definition page to change one of these alarms, the page stays stuck like this and I cannot view the alarm content.

I can’t search for the alarm on the Alerts page either; the page is not updating and never finishes loading.

3. What steps have you already taken to try and solve the problem?
We have tried:

  • running apt update / apt upgrade
  • upgrading the Ubuntu version
  • upgrading the Graylog server version

None of these solved the problem.

4. How can the community help?

How can we fix this? It is important to us.

thanks.

Are there any changes to the system that were made before this issue happened?

I would start by looking at the Graylog logs to see what they are telling me - post up the relevant parts using the </> forum tool to make them readable. There are a bunch of basic command lines here that will give you some diagnostic information, as well as help you give more information to the community so we can help solve your problem.

We didn’t make any changes before this happened. The changes we made were attempts to fix the problem.

tail -f /var/log/graylog-server/server.log
Caused by: ElasticsearchException[Elasticsearch exception [type=search_phase_execution_exception, reason=all shards failed]]; nested: ElasticsearchException[Elasticsearch exception [type=illegal_argument_exception, reason=Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [message] in order to load field data by uninverting the inverted index. Note that this can use significant memory.]]; nested: ElasticsearchException[Elasticsearch exception [type=illegal_argument_exception, reason=Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [message] in order to load field data by uninverting the inverted index. Note that this can use significant memory.]];
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:496)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.ElasticsearchException.failureFromXContent(ElasticsearchException.java:603)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.action.search.MultiSearchResponse.itemFromXContent(MultiSearchResponse.java:215)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.action.search.MultiSearchResponse.lambda$static$1(MultiSearchResponse.java:56)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.AbstractObjectParser.lambda$declareObjectArray$13(AbstractObjectParser.java:254)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.AbstractObjectParser.lambda$declareFieldArray$22(AbstractObjectParser.java:300)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.AbstractObjectParser.parseArray(AbstractObjectParser.java:382)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.AbstractObjectParser.lambda$declareFieldArray$23(AbstractObjectParser.java:300)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.ObjectParser.lambda$declareField$9(ObjectParser.java:386)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.ObjectParser.parseValue(ObjectParser.java:529)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.ObjectParser.parseArray(ObjectParser.java:523)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.ObjectParser.parseSub(ObjectParser.java:555)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.ObjectParser.parse(ObjectParser.java:324)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.ConstructingObjectParser.parse(ConstructingObjectParser.java:171)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.ConstructingObjectParser.apply(ConstructingObjectParser.java:163)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.action.search.MultiSearchResponse.fromXContext(MultiSearchResponse.java:194)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1892)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.client.RestHighLevelClient.lambda$performRequestAndParseEntity$8(RestHighLevelClient.java:1554)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1630)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1583)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1553)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.client.RestHighLevelClient.msearch(RestHighLevelClient.java:1118)
	at org.graylog.storage.elasticsearch7.ElasticsearchClient.lambda$msearch$2(ElasticsearchClient.java:74)
	at org.graylog.storage.elasticsearch7.ElasticsearchClient.execute(ElasticsearchClient.java:98)
	at org.graylog.storage.elasticsearch7.ElasticsearchClient.msearch(ElasticsearchClient.java:74)
	at org.graylog.storage.elasticsearch7.views.ElasticsearchBackend.doRun(ElasticsearchBackend.java:239)
	... 8 more
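Before digging further, it can help to confirm exactly which field the exception is complaining about. A small sketch (the sample log line below is copied from the stack trace above; on a live system you would grep the real /var/log/graylog-server/server.log instead):

```shell
# The nested exception names the field Elasticsearch refuses to aggregate/sort on.
# Write the sample line from the trace above to a temp file standing in for server.log:
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Caused by: ElasticsearchException[Elasticsearch exception [type=illegal_argument_exception, reason=Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [message] in order to load field data by uninverting the inverted index. Note that this can use significant memory.]]
EOF
# Everything inside "set fielddata=true on [...]" is the offending field name:
field=$(grep -o 'set fielddata=true on \[[^]]*\]' "$LOG" | sed 's/.*\[\(.*\)\]/\1/' | sort -u)
echo "$field"   # prints: message
rm -f "$LOG"
```

Here it is `message`, which Graylog maps as an analyzed text field, so any event definition that sorts or aggregates on it will trigger exactly this error.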

tail -f /var/log/elasticsearch/graylog.log 
[2022-04-12T15:12:46,905][INFO ][o.e.c.m.MetadataMappingService] [sclog] [sc-wazuh_63/OhmwHxPpQo-J72XlvhzxtQ] update_mapping [_doc]
[2022-04-12T15:38:48,900][INFO ][o.e.c.m.MetadataMappingService] [sclog] [sc-wazuh_63/OhmwHxPpQo-J72XlvhzxtQ] update_mapping [_doc]
[2022-04-12T15:38:48,947][INFO ][o.e.c.m.MetadataMappingService] [sclog] [graylog_517/L7PLtOKBTNOsg8gfU5_-wg] update_mapping [_doc]
[2022-04-12T15:38:48,950][INFO ][o.e.c.m.MetadataMappingService] [sclog] [sc-wazuh_63/OhmwHxPpQo-J72XlvhzxtQ] update_mapping [_doc]
[2022-04-12T15:38:48,955][INFO ][o.e.c.m.MetadataMappingService] [sclog] [graylog_517/L7PLtOKBTNOsg8gfU5_-wg] update_mapping [_doc]
[2022-04-12T15:39:06,875][INFO ][o.e.c.m.MetadataMappingService] [sclog] [sc-wazuh_63/OhmwHxPpQo-J72XlvhzxtQ] update_mapping [_doc]
[2022-04-12T15:39:06,911][INFO ][o.e.c.m.MetadataMappingService] [sclog] [graylog_517/L7PLtOKBTNOsg8gfU5_-wg] update_mapping [_doc]
[2022-04-12T15:40:02,887][INFO ][o.e.c.m.MetadataMappingService] [sclog] [nginxaccess_1975/cusKvUUMQYalsLI76MdcRg] update_mapping [_doc]
[2022-04-12T15:40:02,926][INFO ][o.e.c.m.MetadataMappingService] [sclog] [nginxaccess_1975/cusKvUUMQYalsLI76MdcRg] update_mapping [_doc]
[2022-04-12T15:40:02,929][INFO ][o.e.c.m.MetadataMappingService] [sclog] [nginxaccess_1975/cusKvUUMQYalsLI76MdcRg] update_mapping [_doc]

That immediately points to Elasticsearch having issues - if all shards are failing, Graylog is going to be pretty unhappy. The page I linked has some starting commands for looking at Elasticsearch health…

curl -XGET http://localhost:9200/_cluster/allocation/explain?pretty

You may also want to post what versions of Graylog/MongoDB/Elasticsearch you are running so we know where you stand. It may not be directly relevant, but it is asked of everyone who posts a question, as it sometimes can be…

dpkg -l | grep -E ".*(elasticsearch|graylog|mongo).*"
--or--
yum list installed | grep -E ".*(elasticsearch|graylog|mongo).*"

When I upgraded the Graylog server I checked the Elasticsearch version, but I didn’t check the health status. Now we have an Elasticsearch health problem. Could the page not be loading because of the database (Elasticsearch or MongoDB)?

curl -XGET http://localhost:9200/_cluster/allocation/explain?pretty
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "unable to find any unassigned shards to explain [ClusterAllocationExplainRequest[useAnyUnassignedShard=true,includeYesDecisions?=false]"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "unable to find any unassigned shards to explain [ClusterAllocationExplainRequest[useAnyUnassignedShard=true,includeYesDecisions?=false]"
  },
  "status" : 400
}

dpkg -l | grep -E ".*(elasticsearch|graylog|mongo).*"
ii  elasticsearch-oss                       7.10.2                                  amd64        Distributed RESTful search engine built for the cloud
ic  graylog-3.2-repository                  1-1                                     all          Package to install Graylog 3.2 GPG key and repository
ii  graylog-3.3-repository                  1-1                                     all          Package to install Graylog 3.3 GPG key and repository
ii  graylog-enterprise-integrations-plugins 4.2.7-1                                 all          Graylog Enterprise Integrations plugins
ii  graylog-enterprise-plugins              4.2.7-1                                 all          Graylog Enterprise plugins
ii  graylog-integrations-plugins            4.2.7-1                                 all          Graylog Integrations plugins
ii  graylog-server                          4.2.7-1                                 all          Graylog server
ii  graylog-sidecar                         1.0.2-1.rc.1                            amd64        Graylog collector sidecar
ii  graylog-sidecar-repository              1-2                                     all          Package to install Graylog Sidecar GPG key and repository
ii  mongodb-org                             4.0.28                                  amd64        MongoDB open source document-oriented database system (metapackage)
ii  mongodb-org-mongos                      4.0.28                                  amd64        MongoDB sharded cluster query router
ii  mongodb-org-server                      4.0.28                                  amd64        MongoDB database server
ii  mongodb-org-shell                       4.0.28                                  amd64        MongoDB shell client
ii  mongodb-org-tools                       4.0.28                                  amd64        MongoDB tools

Also try:

curl -XGET http://localhost:9200/_cluster/health?pretty=true

and maybe just list out the indices:

curl -XGET http://localhost:9200/_cat/indices?pretty

Maybe our problem is caused by MongoDB.

curl -XGET http://localhost:9200/_cluster/health?pretty=true
{
  "cluster_name" : "graylog",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 242,
  "active_shards" : 242,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

curl -XGET http://localhost:9200/_cat/indices?pretty
green open gl-system-events_23 vXbFdqoiTESw22IyTr_lwg 4 0       0 0      1kb      1kb
green open gl-system-events_22 _yH9-eHgQqOywIhol4AcGA 4 0       0 0      1kb      1kb
green open zbackend_609        k_CJKKTcTBiN7q0o-KWJ9w 4 0  239767 0  119.6mb  119.6mb
green open gl-system-events_21 qr233H6JSHW4WypqdjxApQ 4 0       0 0      1kb      1kb
green open gl-system-events_20 Y54jxcy3TyqcFCsXvwH1jw 4 0       0 0      1kb      1kb
green open sclog_227           BxQZGntMSK--Whe1aRnZ0A 4 0       0 0      1kb      1kb
green open sclog_228           MfuzEpLVTkWZAOnjxwuztA 4 0       0 0      1kb      1kb
green open sclog_229           m6RhH2AORYKjDpk0Q1NZSw 4 0       0 0      1kb      1kb
green open graylog_510         D2mi5vdXTSa8leLs-o_zYQ 4 0  899003 0  999.7mb  999.7mb
green open gl-failures_4       s4zqURBWQ9GKSLi5xwzVxg 2 0       0 0     416b     416b
green open graylog_511         k_0AGoN2QN2xvjaMZEaWag 4 0 1167732 0  996.8mb  996.8mb
green open gl-failures_3       Gfi25wCiTMueWmvyavw3JA 2 0       0 0     416b     416b
green open gl-failures_2       mKLo6U7uQluwNIv4wjKg5w 2 0       0 0     416b     416b
green open gl-failures_1       u7r-5EBKROW1LR6Rp8yg7Q 2 0       0 0     416b     416b
green open gl-system-events_19 YDbiJ3miTLOe5e1eP6swYA 4 0       0 0      1kb      1kb
green open gl-failures_0       QS8GktOsTQGFtHoTS1KfWw 2 0       0 0     416b     416b
green open gl-system-events_16 VntZilzVTNeUtN3X3ra4rg 4 0       0 0      1kb      1kb
green open gl-system-events_15 2prP-i3nQ6G9AGWozt7zpg 4 0       0 0      1kb      1kb
green open graylog_516         -NKGq9uYT5WOjbUxiL0TJA 4 0  798074 0  980.7mb  980.7mb
green open gl-system-events_18 rstLyHmVSCaQ2JeyyCTeMQ 4 0       0 0      1kb      1kb
green open graylog_517         L7PLtOKBTNOsg8gfU5_-wg 4 0  718722 0  983.9mb  983.9mb
green open gl-system-events_17 Rzx347CwRrqSQCacyC7vAg 4 0       0 0      1kb      1kb
green open gl-system-events_12 00io72YRSO6w9ascvaaqYg 4 0       0 0      1kb      1kb
green open graylog_514         O_aaBV0uTmyNc8P2hGUOTQ 4 0  681090 0  982.5mb  982.5mb
green open graylog_515         3eCOrIIJTTKxSkK866ndPA 4 0  686626 0  991.2mb  991.2mb
green open gl-system-events_14 B_CZMC4uT3eqiWhNEDbxPA 4 0       0 0      1kb      1kb
green open graylog_512         zkTGFvG1RB-hfGbtLN71jA 4 0  767524 0  795.8mb  795.8mb
green open gl-system-events_13 LYTrrBtDRduemY4XUmavUQ 4 0       0 0      1kb      1kb
green open graylog_513         a8a1lHj3SxiBGXV8FPaIKA 4 0 1038047 0  885.3mb  885.3mb
green open wazuh_60         aWd_U5psQP2EmpAnAU5SOg 4 0  676806 0 1002.5mb 1002.5mb
green open wazuh_63         OhmwHxPpQo-J72XlvhzxtQ 4 0  239936 0  315.9mb  315.9mb
green open wazuh_62         Rq1fN2buQNuEP2asWs7ZRQ 4 0  714974 0  906.1mb  906.1mb
green open graylog_509         aBs-5xIoThSxLw_Xt52Bcg 4 0  890648 0  920.2mb  920.2mb
green open wazuh_61         QjWyiPsESoOy9c-V9ACLHg 4 0  732582 0  834.2mb  834.2mb
green open graylog_507         Edp7CwL0SbOdUDodvNtZog 4 0 1002660 0  992.1mb  992.1mb
green open graylog_508         tgONuq-RRUG4GxTWsL1EWg 4 0  882674 0 1003.8mb 1003.8mb
green open graylog_505         i5C4ExmzQrSbwPR2gZ9GrA 4 0 1262530 0  979.9mb  979.9mb
green open graylog_506         8xLFg9kCTtqnTZ_UhZZIvw 4 0  845493 0  965.7mb  965.7mb
green open graylog_503         u-dcia3pSymUyf7fxsL7Sw 4 0  903081 0 1010.4mb 1010.4mb
green open graylog_504         bL7Ve1CjT1qVmYD1On7law 4 0 1020926 0    989mb    989mb
green open graylog_502         AdzbsKP1RGqnBfQoOF1juw 4 0  861886 0  921.5mb  921.5mb
green open gl-events_12        sWUwvbqAS5eMxt6nKVqXoA 4 0  415858 0  262.5mb  262.5mb
green open gl-events_11        Ac0Ee-3YSVW_cG0xAkN9GA 4 0  226921 0   75.1mb   75.1mb
green open gl-events_10        bJksClvsQw-lBZNhan4RFA 4 0  264575 0  108.3mb  108.3mb
green open wazuh_59         nT-3bszMQR2z8PrmCXXxEA 4 0  653479 0 1011.4mb 1011.4mb
green open gl-events_9         gRfncb-3SNWnU05D0b8E5A 4 0  141447 0   48.2mb   48.2mb
green open gl-events_16        6gg78OTCSr2G46hAOV0NRw 4 0  509007 0  246.1mb  246.1mb
green open gl-events_15        Rs-wZK_kSZ6yg2gnLFerXQ 4 0  638288 0  267.9mb  267.9mb
green open gl-events_14        pZ2w1yJ3QAeG4RDWqb69rA 4 0  695437 0  370.7mb  370.7mb
green open gl-events_13        21ejUBkHSXSV8bfU-_dx2A 4 0  448786 0  321.4mb  321.4mb
green open gl-events_19        HwW0zCbpTiqgD8V6xk2cBA 4 0   21000 0    1.1gb    1.1gb
green open gl-events_18        jUcFNUlAR9ywHVcsz3q9JA 4 0   25668 0    1.1gb    1.1gb
green open gl-events_17        QqAxw3NxRwy-8g8L2vTuvg 4 0  164245 0    3.9gb    3.9gb
green open log_230           2_U8BlsbQbia5ki85HLfFw 4 0       0 0      1kb      1kb
green open log_231           5wDkJa63Q_G8nKk87cfe_g 4 0       0 0     920b     920b
green open log_232           kK7o02XSR_WChzC6vFeVBg 4 0       0 0      1kb      1kb
green open log_233           HWkg_AMMTLWkKzPQ7qKhnQ 4 0       0 0     832b     832b
green open gl-events_20        kKfKORcDRQOlPmWS9IRUjg 4 0    7614 0  138.1mb  138.1mb
green open nginxaccess_1978    Yjc8bOk0RHuAo2YDjQb1Ig 4 0  341457 0  390.6mb  390.6mb
green open log_234           9bzsxa9gSneUDUSVWIm2Bw 4 0       0 0     832b     832b
green open log_235           yTfRuG1PQOe3df7sCMa40A 4 0       0 0     832b     832b
green open log_236           2ILvkL06S6GCw-WlfFf3pQ 4 0       0 0     832b     832b
green open nginxaccess_1977    6gz3s65jReWDBgj3cCviMQ 4 0  793084 0  883.2mb  883.2mb

What makes you say MongoDB? Have you looked in its logs? Is there anything else coming up in the Graylog logs? What are your Graylog conf file settings?

I didn’t change anything. If the alert information is pulled from the database, then the problem must be in Elasticsearch or MongoDB. The edit page opens for some alarms but does not load for others.

Our config file has been the same for a long time; nothing there has changed.

systemctl status graylog-server
graylog-server.service - Graylog server
     Loaded: loaded (/lib/systemd/system/graylog-server.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2022-04-08 17:28:02 +03; 5 days ago
       Docs: http://docs.graylog.org/
   Main PID: 271798 (graylog-server)
      Tasks: 219 (limit: 19032)
     Memory: 4.6G
        CPU: 1d 20h 23min 43.533s
     CGroup: /system.slice/graylog-server.service
             ├─271798 /bin/sh /usr/share/graylog-server/bin/graylog-server
             └─271967 /usr/bin/java -Xms4g -Xmx4g -XX:NewRatio=1 -server -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnl>

Apr 08 17:28:02 sclog systemd[1]: Started Graylog server.
Apr 08 17:30:05 sclog graylog-server[271967]: SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
Apr 08 17:30:05 sclog graylog-server[271967]: SLF4J: Defaulting to no-operation (NOP) logger implementation
Apr 08 17:30:05 sclog graylog-server[271967]: SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

tail -f /var/log/graylog-server/server.log
2022-04-13T17:40:46.569+03:00 ERROR [PivotAggregationSearch] Aggregation search query <query-1> returned an error: Elasticsearch exception [type=illegal_argument_exception, reason=Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [message] in order to load field data by uninverting the inverted index. Note that this can use significant memory.].
ElasticsearchException{message=Search type returned error: , errorDetails=[]}
	at org.graylog.storage.elasticsearch7.views.ElasticsearchBackend.doRun(ElasticsearchBackend.java:255)
	at org.graylog.storage.elasticsearch7.views.ElasticsearchBackend.doRun(ElasticsearchBackend.java:69)
	at org.graylog.plugins.views.search.engine.QueryBackend.run(QueryBackend.java:83)
	at org.graylog.plugins.views.search.engine.QueryEngine.prepareAndRun(QueryEngine.java:164)
	at org.graylog.plugins.views.search.engine.QueryEngine.lambda$execute$6(QueryEngine.java:104)
	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: ElasticsearchException[Elasticsearch exception [type=search_phase_execution_exception, reason=all shards failed]]; nested: ElasticsearchException[Elasticsearch exception [type=illegal_argument_exception, reason=Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [message] in order to load field data by uninverting the inverted index. Note that this can use significant memory.]]; nested: ElasticsearchException[Elasticsearch exception [type=illegal_argument_exception, reason=Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [message] in order to load field data by uninverting the inverted index. Note that this can use significant memory.]];
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:496)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.ElasticsearchException.failureFromXContent(ElasticsearchException.java:603)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.action.search.MultiSearchResponse.itemFromXContent(MultiSearchResponse.java:215)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.action.search.MultiSearchResponse.lambda$static$1(MultiSearchResponse.java:56)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.AbstractObjectParser.lambda$declareObjectArray$13(AbstractObjectParser.java:254)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.AbstractObjectParser.lambda$declareFieldArray$22(AbstractObjectParser.java:300)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.AbstractObjectParser.parseArray(AbstractObjectParser.java:382)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.AbstractObjectParser.lambda$declareFieldArray$23(AbstractObjectParser.java:300)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.ObjectParser.lambda$declareField$9(ObjectParser.java:386)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.ObjectParser.parseValue(ObjectParser.java:529)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.ObjectParser.parseArray(ObjectParser.java:523)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.ObjectParser.parseSub(ObjectParser.java:555)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.ObjectParser.parse(ObjectParser.java:324)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.ConstructingObjectParser.parse(ConstructingObjectParser.java:171)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.common.xcontent.ConstructingObjectParser.apply(ConstructingObjectParser.java:163)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.action.search.MultiSearchResponse.fromXContext(MultiSearchResponse.java:194)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1892)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.client.RestHighLevelClient.lambda$performRequestAndParseEntity$8(RestHighLevelClient.java:1554)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1630)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1583)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1553)
	at org.graylog.shaded.elasticsearch7.org.elasticsearch.client.RestHighLevelClient.msearch(RestHighLevelClient.java:1118)
	at org.graylog.storage.elasticsearch7.ElasticsearchClient.lambda$msearch$2(ElasticsearchClient.java:74)
	at org.graylog.storage.elasticsearch7.ElasticsearchClient.execute(ElasticsearchClient.java:98)
	at org.graylog.storage.elasticsearch7.ElasticsearchClient.msearch(ElasticsearchClient.java:74)
	at org.graylog.storage.elasticsearch7.views.ElasticsearchBackend.doRun(ElasticsearchBackend.java:239)
	... 8 more
	Suppressed: ElasticsearchException[Elasticsearch exception [type=illegal_argument_exception, reason=Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [message] in order to load field data by uninverting the inverted index. Note that this can use significant memory.]]
		at org.graylog.shaded.elasticsearch7.org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:496)
		at org.graylog.shaded.elasticsearch7.org.elasticsearch.ElasticsearchException.fromXContent(ElasticsearchException.java:407)
		at org.graylog.shaded.elasticsearch7.org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:469)
		... 33 more

I am having the same problem myself, and the logs look the same on my infrastructure too. I can’t load the 4th page of alerts, and when I try to edit an alert definition that lives on that 4th page from a saved link, it fails to load.

Here is an example someone posted a while back about an issue with alerts. It shows the log sample that points to the problem, which you can search for, but unfortunately it doesn’t go into how to pull the offending definition out of Mongo. More searching might show how to do that…

@rabia @Hjalti

An example of what @tmacgbay suggested is shown below.

In MongoDB, collections are like tables in MySQL.

[root@graylog] # mongo
> show dbs;  <-- find the Graylog database
> use graylog;  <-- switch to the Graylog database
> show collections;  <-- list all collections (tables) in the Graylog database
> db.getCollectionInfos( { name: "event_definitions" } )  <-- example of inspecting one collection in MongoDB

Now you know where to :eyes: to find the problematic alert(s).

Perhaps this post may help.

php - Elasticsearch - Want to sort by field in all indices where that particular field available or not if not then avoid it - Stack Overflow
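For context, the generic Elasticsearch-side remedy that post describes is to sort/aggregate on a keyword sub-field rather than the analyzed text field. As a sketch, a multi-field mapping would look something like the fragment below (note: in Graylog the mapping of `message` is managed by the index template, so a custom index template, or simply pointing the event definition at an existing keyword field, is usually the safer route than editing mappings by hand):

```json
{
  "mappings": {
    "properties": {
      "message": {
        "type": "text",
        "fields": {
          "keyword": { "type": "keyword", "ignore_above": 256 }
        }
      }
    }
  }
}
```

With a mapping like this, aggregations and sorts target `message.keyword` instead of `message`, avoiding the fielddata error entirely.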


We still have all the alerts, but we cannot search for them, update them, or view the Alerts page.

When I export a content pack I can see all the alerts, and I can see them in the database, but I cannot see them in the web UI.
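Since the definitions do show up in a content-pack export, one way to narrow down the broken one is to search the exported JSON for event definitions whose aggregation references the `message` field. A rough sketch (the two-entry pack excerpt and its field names below are hypothetical, modeled on a typical export; point PACK at your real export file, and prefer a JSON-aware tool like jq if you have it):

```shell
# Hypothetical two-definition excerpt of an exported content pack:
PACK=$(mktemp)
cat > "$PACK" <<'EOF'
{"type":{"name":"event_definition"},"data":{"title":{"@value":"Broken alert"},"config":{"series":[{"field":{"@value":"message"}}]}}}
{"type":{"name":"event_definition"},"data":{"title":{"@value":"Healthy alert"},"config":{"series":[{"field":{"@value":"source"}}]}}}
EOF
# Titles of event definitions whose aggregation series references "message":
suspects=$(grep '"event_definition"' "$PACK" | grep '"field":{"@value":"message"}' | sed 's/.*"title":{"@value":"\([^"]*\)".*/\1/')
echo "$suspects"   # prints: Broken alert
rm -f "$PACK"
```

Whatever titles come out are the definitions to edit (or delete from the `event_definitions` collection, as discussed above) so they no longer sort/aggregate on the analyzed `message` field.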

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.