Graylog is a fully integrated open-source log management platform for collecting, indexing, and analyzing structured and unstructured data from almost any source. Graylog requires the external components listed in the prerequisites below.
From here on, root privileges are required to execute the following commands. To become the root user, either log in to the system as root or switch to the root user with the following command:
$ su -
Password:
Prerequisites
The following prerequisites must be met before installing Graylog so that Graylog can be operated:
Installed Java, e.g. OpenJDK, version 8 or later
A running MongoDB database server, version 5.0.7 up to a maximum of 7.x
A running Elasticsearch search server, version 6.x or 7.x (only needed up to Graylog version 5.x.x)
A running OpenSearch search server, version 1.1.x up to a maximum of 2.13.x
Preparation
To install Graylog as a pacman package, the following repositories must be used or included:
Dependency on OpenJDK, which does not have to be installed explicitly:
jdk11-openjdk - contained in the extra repository of Arch Linux.
Installation of a password generator, here:
pwgen - contained in the community repository of Arch Linux.
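As an illustration of its later use, the 96-character secret required further below for Graylog's password_secret setting could, for example, be generated like this (only a sketch):
$ pwgen -N 1 -s 96
The following command can be used to check which contents were installed with the mongodb-tools package: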
# pacman -Qil mongodb-tools
Name : mongodb-tools
Version : 1:100.7.0-1
Description : Import, export, and diagnostic tools for MongoDB
Architecture : x86_64
URL : https://github.com/mongodb/mongo-tools
Licenses : Apache
Groups : None
Provides : None
Depends On : glibc krb5
Optional Deps : None
Required By : None
Optional For : mongodb-bin
Conflicts With : None
Replaces : None
Installed Size : 88.70 MiB
Packager : Unknown Packager
Build Date : Sun 14 May 2023 07:35:37 PM CEST
Install Date : Sun 14 May 2023 07:36:40 PM CEST
Install Reason : Explicitly installed
Install Script : No
Validated By : None
mongodb-tools /usr/
mongodb-tools /usr/bin/
mongodb-tools /usr/bin/bsondump
mongodb-tools /usr/bin/mongodump
mongodb-tools /usr/bin/mongoexport
mongodb-tools /usr/bin/mongofiles
mongodb-tools /usr/bin/mongoimport
mongodb-tools /usr/bin/mongorestore
mongodb-tools /usr/bin/mongostat
mongodb-tools /usr/bin/mongotop
MongoDB: Set up service/daemon start
To make the MongoDB database system, which runs as a service/daemon in the background, available again after a server restart, the service/daemon should be started together with the server, which can be achieved with the following command:
# systemctl enable mongodb.service
Created symlink /etc/systemd/system/multi-user.target.wants/mongodb.service → /usr/lib/systemd/system/mongodb.service.
Whether the mongod service/daemon really will be started together with the server after a reboot can be checked with the following command, which should produce output like that shown below:
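For example, systemd's is-enabled query can be used; for an enabled unit it prints enabled:
# systemctl is-enabled mongodb.service
enabled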
The mongodb server can then be started with the following command:
# systemctl start mongodb.service
The status of the MongoDB server can be queried with the following command:
# systemctl status mongodb.service
● mongodb.service - MongoDB Database Server
Loaded: loaded (/usr/lib/systemd/system/mongodb.service; enabled; vendor p>
Active: active (running) since Sat 2022-01-29 06:43:59 CET; 7s ago
Docs: https://docs.mongodb.org/manual
Main PID: 41061 (mongod)
Memory: 58.9M
CPU: 577ms
CGroup: /system.slice/mongodb.service
└─41061 /usr/bin/mongod --config /etc/mongodb.conf
Jan 29 06:43:59 server systemd[1]: Started MongoDB Database Server.
Configuration: MongoDB
After the successful installation of MongoDB, an administrator for all MongoDB server databases should be created, along with a dedicated user that is given read and write permissions for the MongoDB database graylog, so that authentication against MongoDB can take place via this user and access is no longer possible without a username and password.
User: Create administrator
First, switch into the MongoDB console with the following command so that database-specific commands can be issued against MongoDB:
# mongo
MongoDB shell version v4.4.12
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("14c2360a-b387-420e-9dba-e11da8ce2479") }
MongoDB server version: 4.4.12
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
https://docs.mongodb.com/
Questions? Try the MongoDB Developer Community Forums
https://community.mongodb.com
---
The server generated these startup warnings when booting:
2022-01-29T06:44:00.460+01:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
---
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
Afterwards, switch into MongoDB's internal administration database, which can be done with the following command:
> use admin
switched to db admin
Now an administration user for the MongoDB server should be created, which can be done with the following command:
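The concrete call is only sketched here, using MongoDB's standard db.createUser() with the assumed user name admin and a placeholder password:
> db.createUser({ user: "admin", pwd: "<admin-password>", roles: [ { role: "root", db: "admin" } ] })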
Finally, the MongoDB console is exited with the following command:
> exit
bye
MongoDB: Stop
Now the mongodb server must be stopped with the following command:
# systemctl stop mongodb.service
The status of the MongoDB server can be queried with the following command:
# systemctl status mongodb.service
○ mongodb.service - MongoDB Database Server
Loaded: loaded (/usr/lib/systemd/system/mongodb.service; enabled; vendor p>
Active: inactive (dead) since Fri 2022-01-29 07:18:15 CET; 4s ago
Docs: https://docs.mongodb.org/manual
Process: 5957 ExecStart=/usr/bin/mongod --config /etc/mongodb.conf (code=ex>
Main PID: 5957 (code=exited, status=0/SUCCESS)
CPU: 3.169s
Jan 29 07:18:00 server systemd[1]: Started MongoDB Database Server.
Jan 29 07:18:15 server systemd[1]: Stopping MongoDB Database Server...
Jan 29 07:18:15 server systemd[1]: mongodb.service: Deactivated successfully.
Jan 29 07:18:15 server systemd[1]: Stopped MongoDB Database Server.
Jan 29 07:18:15 server systemd[1]: mongodb.service: Consumed 3.169s CPU time.
/etc/mongodb.conf - Enable IPv6
The following change makes MongoDB reachable via IPv6 as well. This is required if no adjustments to the startup behavior of Graylog are to be made!
The changes sketched below must be made for this in the configuration file /etc/mongodb.conf:
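The exact file contents depend on the installed MongoDB version; assuming the YAML configuration format used by current MongoDB releases, the relevant net section could look like this:
net:
  port: 27017
  bindIp: 127.0.0.1,::1
  ipv6: true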
Afterwards, the mongodb server can be started again with the following command:
# systemctl start mongodb.service
The status of the MongoDB server can be queried with the following command:
# systemctl status mongodb.service
● mongodb.service - MongoDB Database Server
Loaded: loaded (/usr/lib/systemd/system/mongodb.service; enabled; vendor p>
Active: active (running) since Sat 2022-01-29 07:25:40 CET; 7s ago
Docs: https://docs.mongodb.org/manual
Main PID: 41255 (mongod)
Memory: 156.6M
CPU: 647ms
CGroup: /system.slice/mongodb.service
└─41255 /usr/bin/mongod --config /etc/mongodb.conf
Jan 29 07:25:40 server systemd[1]: Started MongoDB Database Server.
User: Create "grayloguser"
First, switch into the MongoDB console again with the following command so that database-specific commands can be issued against MongoDB:
# mongo --authenticationDatabase admin -u admin -p
MongoDB shell version v4.4.12
Enter password:
connecting to: mongodb://127.0.0.1:27017/?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("4447a6f0-86d3-4a8c-8b56-2b887f862fc8") }
MongoDB server version: 4.4.12
Afterwards, switch to the MongoDB database for which a user with a password is to be created, which can be done with the following command:
> use graylog
switched to db graylog
Now a "grayloguser" user for the MongoDB server should be created, which can be done with the following command:
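Again only a sketch, using db.createUser() with a placeholder password and read/write access to the graylog database:
> db.createUser({ user: "grayloguser", pwd: "<grayloguser-password>", roles: [ { role: "readWrite", db: "graylog" } ] })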
The following command creates a directory in which the AUR package - elasticsearch-xpack - can be built and installed:
# mkdir /var/cache/makepkg
Afterwards, ownership of the directory should be transferred to an unprivileged user - here: klaus - since the later execution of the makepkg command can only be done by an unprivileged user:
# chown klaus:klaus /var/cache/makepkg
Afterwards, switch back to the user, here: klaus, which can be done with the following command:
# exit
logout
As the user, here: klaus, you can now change into the directory /var/cache/makepkg:
$ cd /var/cache/makepkg
The AUR repository package elasticsearch-xpack can now be downloaded with the following command:
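A sketch of the usual checkout of an AUR package via git (assuming git is installed):
$ git clone https://aur.archlinux.org/elasticsearch-xpack.git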
Afterwards, change into the newly created directory /var/cache/makepkg/elasticsearch-xpack with the following command:
$ cd /var/cache/makepkg/elasticsearch-xpack
The following command lists the contents of the directory /var/cache/makepkg/elasticsearch-xpack:
$ ls -l
total 44
-rw-r--r-- 1 klaus klaus 218 Jan 29 08:17 elasticsearch.default
-rw-r--r-- 1 klaus klaus 1668 Jan 29 08:17 elasticsearch-env
-rw-r--r-- 1 klaus klaus 261 Jan 29 08:17 elasticsearch-keystore.service
-rw-r--r-- 1 klaus klaus 311 Jan 29 08:17 elasticsearch-keystore@.service
-rw-r--r-- 1 klaus klaus 1844 Jan 29 08:17 elasticsearch.service
-rw-r--r-- 1 klaus klaus 1879 Jan 29 08:17 elasticsearch@.service
-rw-r--r-- 1 klaus klaus 23 Jan 29 08:17 elasticsearch-sysctl.conf
-rw-r--r-- 1 klaus klaus 345 Jan 29 08:17 elasticsearch-tmpfile.conf
-rw-r--r-- 1 klaus klaus 39 Jan 29 08:17 elasticsearch-user.conf
-rw-r--r-- 1 klaus klaus 4949 Jan 29 08:17 PKGBUILD
As the last preparatory step, the GPG key of Elasticsearch (Elasticsearch Signing Key) dev_ops@elasticsearch.org should be imported with the following command in order to confirm the authenticity of the package:
$ curl -sS https://artifacts.elastic.co/GPG-KEY-elasticsearch | gpg --import -
gpg: directory '/home/klaus/.gnupg' created
gpg: keybox '/home/klaus/.gnupg/pubring.kbx' created
gpg: /home/klaus/.gnupg/trustdb.gpg: trustdb created
gpg: key D27D666CD88E42B4: public key "Elasticsearch (Elasticsearch Signing Key) <dev_ops@elasticsearch.org>" imported
gpg: Total number processed: 1
gpg: imported: 1
The following command now builds the pacman package, which will deliberately be installed later with a second command:
IMPORTANT - The user in question, here: klaus, must have sudo rights!
$ makepkg -cCfs
==> Making package: elasticsearch-xpack 7.16.3-1 (Sat 29 Jan 2022 08:26:56 AM CET)
==> Checking runtime dependencies...
==> Installing missing dependencies...
[sudo] password for klaus:
:: There are 3 providers available for java-runtime-headless:
:: Repository extra
1) jre-openjdk-headless 2) jre11-openjdk-headless 3) jre8-openjdk-headless
Enter a number (default=1): 2
resolving dependencies...
looking for conflicting packages...
warning: dependency cycle detected:
warning: harfbuzz will be installed before its freetype2 dependency
Package (12) New Version Net Change Download Size
extra/freetype2 2.11.1-1 1.59 MiB 0.48 MiB
extra/graphite 1:1.3.14-1 0.67 MiB 0.22 MiB
extra/harfbuzz 3.2.0-1 5.45 MiB 0.91 MiB
extra/java-runtime-common 3-3 0.01 MiB 0.00 MiB
extra/lcms2 2.12-1 0.65 MiB 0.21 MiB
extra/libjpeg-turbo 2.1.2-1 2.02 MiB 0.41 MiB
extra/libnet 1:1.1.6-1 0.30 MiB 0.09 MiB
extra/libpng 1.6.37-3 0.55 MiB 0.24 MiB
extra/libtiff 4.3.0-1 2.82 MiB 0.85 MiB
core/nspr 4.33-1 0.72 MiB 0.19 MiB
core/nss 3.74-1 4.85 MiB 1.52 MiB
extra/jre11-openjdk-headless 11.0.13.u8-1 157.52 MiB 35.24 MiB
Total Download Size: 40.36 MiB
Total Installed Size: 177.14 MiB
:: Proceed with installation? [Y/n] Y
:: Retrieving packages...
jre11-openjdk-he... 35.2 MiB 12.1 MiB/s 00:03 [######################] 100%
nss-3.74-1-x86_64 1552.6 KiB 10.8 MiB/s 00:00 [######################] 100%
harfbuzz-3.2.0-1... 927.6 KiB 11.3 MiB/s 00:00 [######################] 100%
libtiff-4.3.0-1-... 869.1 KiB 9.43 MiB/s 00:00 [######################] 100%
freetype2-2.11.1... 488.8 KiB 9.55 MiB/s 00:00 [######################] 100%
libjpeg-turbo-2.... 422.0 KiB 8.24 MiB/s 00:00 [######################] 100%
libpng-1.6.37-3-... 245.9 KiB 6.00 MiB/s 00:00 [######################] 100%
graphite-1:1.3.1... 224.5 KiB 7.31 MiB/s 00:00 [######################] 100%
lcms2-2.12-1-x86_64 212.8 KiB 2.60 MiB/s 00:00 [######################] 100%
nspr-4.33-1-x86_64 197.9 KiB 6.44 MiB/s 00:00 [######################] 100%
libnet-1:1.1.6-1... 96.3 KiB 3.13 MiB/s 00:00 [######################] 100%
java-runtime-com... 4.9 KiB 244 KiB/s 00:00 [######################] 100%
Total (12/12) 40.4 MiB 10.7 MiB/s 00:04 [######################] 100%
(12/12) checking keys in keyring [######################] 100%
(12/12) checking package integrity [######################] 100%
(12/12) loading package files [######################] 100%
(12/12) checking for file conflicts [######################] 100%
(12/12) checking available disk space [######################] 100%
:: Running pre-transaction hooks...
(1/1) Performing snapper pre snapshots for the following configurations...
==> root: 34
:: Processing package changes...
( 1/12) installing java-runtime-common [######################] 100%
For the complete set of Java binaries to be available in your PATH,
you need to re-login or source /etc/profile.d/jre.sh
Please note that this package does not support forcing JAVA_HOME as former package java-common did
( 2/12) installing nspr [######################] 100%
( 3/12) installing nss [######################] 100%
( 4/12) installing libjpeg-turbo [######################] 100%
Optional dependencies for libjpeg-turbo
java-runtime>11: for TurboJPEG Java wrapper
( 5/12) installing libtiff [######################] 100%
Optional dependencies for libtiff
freeglut: for using tiffgt
( 6/12) installing lcms2 [######################] 100%
( 7/12) installing libnet [######################] 100%
( 8/12) installing libpng [######################] 100%
( 9/12) installing graphite [######################] 100%
(10/12) installing harfbuzz [######################] 100%
Optional dependencies for harfbuzz
cairo: hb-view program
chafa: hb-view program
(11/12) installing freetype2 [######################] 100%
(12/12) installing jre11-openjdk-headless [######################] 100%
Optional dependencies for jre11-openjdk-headless
java-rhino: for some JavaScript support
:: Running post-transaction hooks...
(1/2) Arming ConditionNeedsUpdate...
(2/2) Performing snapper post snapshots for the following configurations...
==> root: 35
==> Checking buildtime dependencies...
==> Retrieving sources...
-> Downloading elasticsearch-7.16.3-x86_64.rpm...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 297M 100 297M 0 0 9340k 0 0:00:32 0:00:32 --:--:-- 10.8M
-> Downloading elasticsearch-7.16.3-x86_64.rpm.asc...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 488 100 488 0 0 9636 0 --:--:-- --:--:-- --:--:-- 9760
-> Found elasticsearch-env
-> Found elasticsearch.service
-> Found elasticsearch@.service
-> Found elasticsearch-keystore.service
-> Found elasticsearch-keystore@.service
-> Found elasticsearch-sysctl.conf
-> Found elasticsearch-user.conf
-> Found elasticsearch-tmpfile.conf
-> Found elasticsearch.default
==> Validating source files with sha512sums...
elasticsearch-7.16.3-x86_64.rpm ... Passed
elasticsearch-7.16.3-x86_64.rpm.asc ... Skipped
elasticsearch-env ... Passed
elasticsearch.service ... Passed
elasticsearch@.service ... Passed
elasticsearch-keystore.service ... Passed
elasticsearch-keystore@.service ... Passed
elasticsearch-sysctl.conf ... Passed
elasticsearch-user.conf ... Passed
elasticsearch-tmpfile.conf ... Passed
elasticsearch.default ... Passed
==> Verifying source file signatures with gpg...
elasticsearch-7.16.3-x86_64.rpm ... Passed
==> Removing existing $srcdir/ directory...
==> Extracting sources...
-> Extracting elasticsearch-7.16.3-x86_64.rpm with bsdtar
==> Starting prepare()...
==> Entering fakeroot environment...
==> Starting package()...
install: creating directory '/var/cache/makepkg/elasticsearch-xpack/pkg/elasticsearch-xpack/usr/share/licenses'
install: creating directory '/var/cache/makepkg/elasticsearch-xpack/pkg/elasticsearch-xpack/usr/share/licenses/elasticsearch-xpack'
'usr/share/elasticsearch/LICENSE.txt' -> '/var/cache/makepkg/elasticsearch-xpack/pkg/elasticsearch-xpack/usr/share/licenses/elasticsearch-xpack/LICENSE.txt'
==> Tidying install...
-> Removing libtool files...
-> Purging unwanted files...
-> Removing static library files...
-> Stripping unneeded symbols from binaries and libraries...
-> Compressing man and info pages...
==> Checking for packaging issues...
==> Creating package "elasticsearch-xpack"...
-> Generating .PKGINFO file...
-> Generating .BUILDINFO file...
-> Generating .MTREE file...
-> Compressing package...
==> Leaving fakeroot environment.
==> Finished making: elasticsearch-xpack 7.16.3-1 (Sat 29 Jan 2022 08:28:10 AM CET)
==> Cleaning up...
Listing the contents of the directory /var/cache/makepkg/elasticsearch-xpack again should now show the created pacman package, which can be checked with the following command:
$ ls -l
total 465540
-rw-r--r-- 1 klaus klaus 311430167 Jan 29 08:28 elasticsearch-7.16.3-x86_64.rpm
-rw-r--r-- 1 klaus klaus 488 Jan 29 08:28 elasticsearch-7.16.3-x86_64.rpm.asc
-rw-r--r-- 1 klaus klaus 218 Jan 29 08:17 elasticsearch.default
-rw-r--r-- 1 klaus klaus 1668 Jan 29 08:17 elasticsearch-env
-rw-r--r-- 1 klaus klaus 261 Jan 29 08:17 elasticsearch-keystore.service
-rw-r--r-- 1 klaus klaus 311 Jan 29 08:17 elasticsearch-keystore@.service
-rw-r--r-- 1 klaus klaus 1844 Jan 29 08:17 elasticsearch.service
-rw-r--r-- 1 klaus klaus 1879 Jan 29 08:17 elasticsearch@.service
-rw-r--r-- 1 klaus klaus 23 Jan 29 08:17 elasticsearch-sysctl.conf
-rw-r--r-- 1 klaus klaus 345 Jan 29 08:17 elasticsearch-tmpfile.conf
-rw-r--r-- 1 klaus klaus 39 Jan 29 08:17 elasticsearch-user.conf
-rw-r--r-- 1 klaus klaus 165231016 Jan 29 08:28 elasticsearch-xpack-7.16.3-1-x86_64.pkg.tar.zst
-rw-r--r-- 1 klaus klaus 4949 Jan 29 08:17 PKGBUILD
From here on, root privileges are required to execute the following commands. To become the root user, either log in to the system as root or switch to the root user with the following command:
$ su -
Password:
The following command now installs the pacman package that was just built:
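A sketch, assuming the package file produced above:
# pacman -U /var/cache/makepkg/elasticsearch-xpack/elasticsearch-xpack-7.16.3-1-x86_64.pkg.tar.zst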
To make the search service Elasticsearch, which runs as a service/daemon in the background, available again after a server restart, the service/daemon should be started together with the server, which can be achieved with the following command:
# systemctl enable elasticsearch.service
Created symlink /etc/systemd/system/multi-user.target.wants/elasticsearch.service → /usr/lib/systemd/system/elasticsearch.service.
Whether the elasticsearch service/daemon really will be started together with the server after a reboot can be checked with the following command, which should produce output like that shown below:
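For example, systemd's is-enabled query can be used; for an enabled unit it prints enabled:
# systemctl is-enabled elasticsearch.service
enabled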
Before the service/daemon of Elasticsearch can be started, the following configuration has to be carried out in the configuration file
/etc/elasticsearch/elasticsearch.yml
(complete configuration file):
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
# Tachtler
# default: #cluster.name: my-application
cluster.name: graylog
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
# ---------------------------------- Security ----------------------------------
#
#                                 *** WARNING ***
#
# Elasticsearch security features are not enabled by default.
# These features are free, but require configuration changes to enable them.
# This means that users don’t have to provide credentials and can get full access
# to the cluster. Network connections are also not encrypted.
#
# To protect your data, we strongly encourage you to enable the Elasticsearch security features.
# Refer to the following documentation for instructions.
#
# https://www.elastic.co/guide/en/elasticsearch/reference/7.16/configuring-stack-security.html
# Tachtler
xpack.security.enabled: true
discovery.type: single-node
The following changes were made:
cluster.name: graylog
Sets the cluster name for access by Graylog.
NOTE - This is essentially the only relevant change.
xpack.security.enabled: true
Enables the security mechanisms that ship with the X-Pack.
discovery.type: single-node
Sets the type defining what kind of installation and operation is involved (here: a single node).
Elasticsearch: First start
The elasticsearch server can then be started with the following command:
# systemctl start elasticsearch.service
* This can take a little while!
The status of the Elasticsearch server can be queried with the following command:
# systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; ve>
Active: active (running) since Sat 2022-01-29 08:55:27 CET; 40s ago
Docs: http://www.elastic.co
Process: 67812 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-keys>
Main PID: 67884 (java)
Tasks: 68 (limit: 2341)
Memory: 1.4G
CPU: 34.122s
CGroup: /system.slice/elasticsearch.service
├─67884 /usr/lib/jvm/default-runtime/bin/java -Xshare:auto -Des.ne>
└─68058 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux->
Jan 29 08:55:37 server elasticsearch[67884]: [2022-01-29T08:55:37,566][INFO ][o>
Jan 29 08:55:37 server elasticsearch[67884]: [2022-01-29T08:55:37,567][INFO ][o>
Jan 29 08:55:37 server elasticsearch[67884]: [2022-01-29T08:55:37,738][INFO ][o>
Jan 29 08:55:37 server elasticsearch[67884]: [2022-01-29T08:55:37,745][INFO ][o>
Jan 29 08:55:38 server elasticsearch[67884]: [2022-01-29T08:55:38,086][INFO ][o>
Jan 29 08:55:38 server elasticsearch[67884]: [2022-01-29T08:55:38,282][INFO ][o>
Jan 29 08:55:38 server elasticsearch[67884]: [2022-01-29T08:55:38,474][INFO ][o>
Jan 29 08:55:38 server elasticsearch[67884]: [2022-01-29T08:55:38,643][INFO ][o>
Jan 29 08:55:38 server elasticsearch[67884]: [2022-01-29T08:55:38,696][INFO ][o>
Jan 29 08:55:38 server elasticsearch[67884]: [2022-01-29T08:55:38,737][INFO ][o>
Elasticsearch: Test
NOTE - Only applies up to Graylog version 5.x.x!
A connection test can be performed by calling Elasticsearch at the communication URL and port http://localhost:9200, which can be done, for example, with the following command:
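A sketch, assuming curl is installed (once X-Pack security is active, the call is rejected without credentials):
# curl http://localhost:9200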
The following command performs an interactive password assignment for the following users:
elastic
apm_system
kibana_system
logstash_system
beats_system
remote_monitoring_user
# elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users
elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana_system]:
Reenter password for [kibana_system]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
NOTE - Access to Elasticsearch without a username and the corresponding password should no longer be possible after this!
A connection test can now only be performed together with a username and the corresponding password, by calling Elasticsearch at the communication URL and port http://localhost:9200, which can be done, for example, with the following command:
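A sketch, assuming the elastic user set above; curl will prompt for the password:
# curl -u elastic http://localhost:9200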
# pacman --noconfirm -S opensearch
resolving dependencies...
looking for conflicting packages...
Package (1) Version Net Change Download Size
extra/opensearch 2.13.0-1 0.00 MiB 78.75 MiB
Total Download Size: 78.75 MiB
Total Installed Size: 131.22 MiB
Net Upgrade Size: 0.00 MiB
:: Proceed with installation? [Y/n]
:: Retrieving packages...
opensearch-2.13.... 78.7 MiB 12.7 MiB/s 00:06 [######################] 100%
(1/1) checking keys in keyring [######################] 100%
(1/1) checking package integrity [######################] 100%
(1/1) loading package files [######################] 100%
(1/1) checking for file conflicts [######################] 100%
(1/1) checking available disk space [######################] 100%
:: Running pre-transaction hooks...
(1/1) Performing snapper pre snapshots for the following configurations...
==> root: 11
:: Processing package changes...
(1/1) installing opensearch [######################] 100%
:: Running post-transaction hooks...
(1/7) Creating system user accounts...
(2/7) Reloading system manager configuration...
(3/7) Applying kernel sysctl settings...
(4/7) Creating temporary files...
(5/7) Arming ConditionNeedsUpdate...
(6/7) Check if daemons need restart after library/binary upgrades
Running kernel seems to be up-to-date.
Failed to check for processor microcode upgrades.
Services to be restarted:
systemctl restart opensearch.service
No containers need to be restarted.
No user sessions are running outdated binaries.
No VM guests are running outdated hypervisor (qemu) binaries on this host.
(7/7) Performing snapper post snapshots for the following configurations...
==> root: 12
The following command can be used to check which contents were installed with the opensearch package:
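# pacman -Qil opensearch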
To make the search service Opensearch, which runs as a service/daemon in the background, available again after a server restart, the service/daemon should be started together with the server, which can be achieved with the following command:
# systemctl enable opensearch.service
Created symlink /etc/systemd/system/multi-user.target.wants/opensearch.service → /usr/lib/systemd/system/opensearch.service.
Whether the opensearch service/daemon really will be started together with the server after a reboot can be checked with the following command, which should produce output like that shown below:
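For example, systemd's is-enabled query can be used; for an enabled unit it prints enabled:
# systemctl is-enabled opensearch.service
enabled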
Before the service/daemon of Opensearch can be started, the following configuration has to be carried out in the configuration file
/etc/opensearch/opensearch.yml
(complete configuration file):
# ======================== OpenSearch Configuration =========================
#
# NOTE: OpenSearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.opensearch.org
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
# Tachtler
# default: #cluster.name: my-application
cluster.name: graylog
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# Tachtler
# default: #path.data: /path/to/data
path.data: /var/lib/opensearch
#
# Path to log files:
#
# Tachtler
# default: #path.logs: /path/to/logs
path.logs: /var/log/opensearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# OpenSearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of cluster-manager-eligible nodes:
#
#cluster.initial_cluster_manager_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
# ---------------------------------- Remote Store -----------------------------------
# Controls whether cluster imposes index creation only with remote store enabled
# cluster.remote_store.enabled: true
#
# Repository to use for segment upload while enforcing remote store for an index
# node.attr.remote_store.segment.repository: my-repo-1
#
# Repository to use for translog upload while enforcing remote store for an index
# node.attr.remote_store.translog.repository: my-repo-1
#
# ---------------------------------- Experimental Features -----------------------------------
# Gates the visibility of the experimental segment replication features until they are production ready.
#
#opensearch.experimental.feature.segment_replication_experimental.enabled: false
#
# Gates the functionality of a new parameter to the snapshot restore API
# that allows for creation of a new index type that searches a snapshot
# directly in a remote repository without restoring all index data to disk
# ahead of time.
#
#opensearch.experimental.feature.searchable_snapshot.enabled: false
#
#
# Gates the functionality of enabling extensions to work with OpenSearch.
# This feature enables applications to extend features of OpenSearch outside of
# the core.
#
#opensearch.experimental.feature.extensions.enabled: false
#
#
# Gates the optimization of datetime formatters caching along with change in default datetime formatter
# Once there is no observed impact on performance, this feature flag can be removed.
#
#opensearch.experimental.optimization.datetime_formatter_caching.enabled: false
#
# Gates the functionality of enabling Opensearch to use pluggable caches with respective store names via setting.
#
#opensearch.experimental.feature.pluggable.caching.enabled: false
The following changes were made:
cluster.name: graylog
Sets the cluster name for access by Graylog.
NOTE - This is essentially the only relevant change.
path.data: /var/lib/opensearch
Sets the path for storing the node data.
path.logs: /var/log/opensearch
Sets the path for storing the log files.
Opensearch: First start
The opensearch server can then be started with the following command:
# systemctl start opensearch.service
* This can take a little while!
The status of the Opensearch server can be queried with the following command:
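# systemctl status opensearch.service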
● opensearch.service - OpenSearch
Loaded: loaded (/usr/lib/systemd/system/opensearch.service; enabled; prese>
Active: active (running) since Mon 2024-05-20 07:08:34 CEST; 29min ago
Docs: https://opensearch.org/docs/opensearch/index/
Process: 62334 ExecStartPre=/usr/share/opensearch/bin/opensearch-keystore u>
Main PID: 62388 (java)
Tasks: 46 (limit: 4653)
Memory: 1.3G (peak: 1.3G)
CPU: 1min 29.141s
CGroup: /system.slice/opensearch.service
└─62388 /usr/lib/jvm/java-11-openjdk/bin/java -Xshare:auto -Dopens>
May 20 07:08:36 server opensearch[62388]: [2024-05-20T07:08:36,092][INFO ][o.o.>
May 20 07:08:36 server opensearch[62388]: [2024-05-20T07:08:36,141][INFO ][o.o.>
May 20 07:08:36 server opensearch[62388]: [2024-05-20T07:08:36,162][INFO ][o.o.>
May 20 07:08:36 server opensearch[62388]: [2024-05-20T07:08:36,184][INFO ][o.o.>
May 20 07:08:36 server opensearch[62388]: [2024-05-20T07:08:36,445][INFO ][o.o.>
May 20 07:08:36 server opensearch[62388]: [2024-05-20T07:08:36,501][INFO ][o.o.>
May 20 07:10:08 server opensearch[62388]: [2024-05-20T07:10:08,656][INFO ][o.o.>
May 20 07:10:08 server opensearch[62388]: [2024-05-20T07:10:08,686][INFO ][o.o.>
May 20 07:10:08 server opensearch[62388]: [2024-05-20T07:10:08,756][INFO ][o.o.>
May 20 07:10:08 server opensearch[62388]: [2024-05-20T07:10:08,771][INFO ][o.o.>
Opensearch: Test
A connection test can be performed by calling Opensearch at the communication URL and port http://localhost:9200, which can be done, for example, with the following command:
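A sketch, assuming curl is installed:
# curl http://localhost:9200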
# pacman -Qil graylog
Name : graylog
Version : 4.2.2-1
Description : Graylog is an open source syslog implementation that stores
your logs in ElasticSearch and MongoDB
Architecture : any
URL : https://www.graylog.org/
Licenses : SSPL
Groups : None
Provides : None
Depends On : java-runtime-headless=11
Optional Deps : elasticsearch [installed]
mongodb [installed]
Required By : None
Optional For : None
Conflicts With : None
Replaces : None
Installed Size : 207.70 MiB
Packager : Unknown Packager
Build Date : Sat 29 Jan 2022 09:03:51 AM CET
Install Date : Sat 29 Jan 2022 09:04:12 AM CET
Install Reason : Explicitly installed
Install Script : Yes
Validated By : None
graylog /etc/
graylog /etc/graylog/
graylog /etc/graylog/server/
graylog /etc/graylog/server/server.conf
graylog /usr/
graylog /usr/lib/
graylog /usr/lib/graylog/
graylog /usr/lib/graylog/plugin/
graylog /usr/lib/graylog/plugin/graylog-plugin-aws-4.2.2.jar
graylog /usr/lib/graylog/plugin/graylog-plugin-collector-4.2.2.jar
graylog /usr/lib/graylog/plugin/graylog-plugin-threatintel-4.2.2.jar
graylog /usr/lib/graylog/plugin/graylog-storage-elasticsearch6-4.2.2.jar
graylog /usr/lib/graylog/plugin/graylog-storage-elasticsearch7-4.2.2.jar
graylog /usr/lib/graylog/server.jar
graylog /usr/lib/systemd/
graylog /usr/lib/systemd/system/
graylog /usr/lib/systemd/system/graylog.service
graylog /usr/lib/tmpfiles.d/
graylog /usr/lib/tmpfiles.d/graylog-server.conf
graylog /usr/share/
graylog /usr/share/doc/
graylog /usr/share/doc/graylog/
graylog /usr/share/doc/graylog/LICENSE
graylog /usr/share/doc/graylog/README.markdown
Graylog: Set up service/daemon start
To make Graylog, which runs as a service/daemon in the background, available again after a server restart, the service/daemon should be started together with the server, which can be achieved with the following command:
# systemctl enable graylog.service
Created symlink /etc/systemd/system/multi-user.target.wants/graylog.service → /usr/lib/systemd/system/graylog.service.
Whether the graylog service/daemon really will be started together with the server after a reboot can be checked with the following command, which should produce output like that shown below:
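For example, systemd's is-enabled query can be used; for an enabled unit it prints enabled:
# systemctl is-enabled graylog.service
enabled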
The following configuration file is the main configuration file of the Graylog server:
/etc/graylog/server/server.conf
The following adjustments are required for the Graylog server to be able to run:
(Complete configuration file): - NOTE - current version 6.0.1
#############################
# GRAYLOG CONFIGURATION FILE
#############################
#
# This is the Graylog configuration file. The file has to use ISO 8859-1/Latin-1 character encoding.
# Characters that cannot be directly represented in this encoding can be written using Unicode escapes
# as defined in https://docs.oracle.com/javase/specs/jls/se8/html/jls-3.html#jls-3.3, using the \u prefix.
# For example, \u002c.
#
# * Entries are generally expected to be a single line of the form, one of the following:
#
# propertyName=propertyValue
# propertyName:propertyValue
#
# * White space that appears between the property name and property value is ignored,
#   so the following are equivalent:
#
# name=Stephen
# name = Stephen
#
# * White space at the beginning of the line is also ignored.
#
# * Lines that start with the comment characters ! or # are ignored. Blank lines are also ignored.
#
# * The property value is generally terminated by the end of the line. White space following the
#   property value is not ignored, and is treated as part of the property value.
#
# * A property value can span several lines if each line is terminated by a backslash (‘\’) character.
#   For example:
#
# targetCities=\
#         Detroit,\
#         Chicago,\
#         Los Angeles
#
# This is equivalent to targetCities=Detroit,Chicago,Los Angeles (white space at the beginning of lines is ignored).
#
# * The characters newline, carriage return, and tab can be inserted with characters \n, \r, and \t, respectively.
#
# * The backslash character must be escaped as a double backslash. For example:
#
# path=c:\\docs\\doc1
#
# If you are running more than one instances of Graylog server you have to select one of these
# instances as leader. The leader will perform some periodical tasks that non-leaders won't perform.
is_leader = true

# The auto-generated node ID will be stored in this file and read after restarts. It is a good idea
# to use an absolute file path here if you are starting Graylog server from init scripts or similar.
node_id_file = /etc/graylog/server/node-id
# You MUST set a secret to secure/pepper the stored user passwords here. Use at least 64 characters.
# Generate one by using for example: pwgen -N 1 -s 96
# ATTENTION: This value must be the same on all Graylog nodes in the cluster.
# Changing this value after installation will render all user sessions and encrypted values in the database invalid. (e.g. encrypted access tokens)
# Tachtler
# default: password_secret =
password_secret = ndyRpQps9TVbramSXqVTZp42lzS9OUvy2Fn5lVxTmrxnwTR2OR1j94SYoIT2OMNxksq4OJO7hasBqFVU5U9TpkrEgccxoeWc
# The default root user is named 'admin'
# Tachtler
# default: #root_username = admin
root_username = administrator
# You MUST specify a hash password for the root user (which you only need to initially set up the
# system and in case you lose connectivity to your authentication backend)
# This password cannot be changed using the API or via the web interface. If you need to change it,
# modify it in this file.
# Create one by using for example: echo -n yourpassword | shasum -a 256
# and put the resulting hash value into the following line
# Tachtler
# default: root_password_sha2 =
root_password_sha2 = addb0f5e7826c857d7376d1bd9bc33c0c544790a2eac96144a8af22b1298c940
# The email address of the root user.
# Default is empty
# Tachtler
# default: #root_email = ""
root_email = graylog@tachtler.net
# The time zone setting of the root user. See http://www.joda.org/joda-time/timezones.html for a list of valid time zones.
# Default is UTC
# Tachtler
# default: #root_timezone = UTC
root_timezone = Europe/Berlin
# Set the bin directory here (relative or absolute)
# This directory contains binaries that are used by the Graylog server.
# Default: bin
bin_dir = bin
# Set the data directory here (relative or absolute)
# This directory is used to store Graylog server state.
# Tachtler
# default: #data_dir = data
data_dir = /var/lib/graylog/data
# Set plugin directory here (relative or absolute)
plugin_dir = /usr/lib/graylog/plugin
################
# HTTP settings
################

#### HTTP bind address
#
# The network interface used by the Graylog HTTP interface.
#
# This network interface must be accessible by all Graylog nodes in the cluster and by all clients
# using the Graylog web interface.
#
# If the port is omitted, Graylog will use port 9000 by default.
#
# Default: 127.0.0.1:9000
#http_bind_address = 127.0.0.1:9000
#http_bind_address = [2001:db8::1]:9000

#### HTTP publish URI
#
# The HTTP URI of this Graylog node which is used to communicate with the other Graylog nodes in the cluster and by all
# clients using the Graylog web interface.
#
# The URI will be published in the cluster discovery APIs, so that other Graylog nodes will be able to find and connect to this Graylog node.
#
# This configuration setting has to be used if this Graylog node is available on another network interface than $http_bind_address,
# for example if the machine has multiple network interfaces or is behind a NAT gateway.
#
# If $http_bind_address contains a wildcard IPv4 address (0.0.0.0), the first non-loopback IPv4 address of this machine will be used.
# This configuration setting *must not* contain a wildcard address!
#
# Default: http://$http_bind_address/
#http_publish_uri = http://192.168.1.1:9000/

#### External Graylog URI
#
# The public URI of Graylog which will be used by the Graylog web interface to communicate with the Graylog REST API.
#
# The external Graylog URI usually has to be specified, if Graylog is running behind a reverse proxy or load-balancer
# and it will be used to generate URLs addressing entities in the Graylog REST API (see $http_bind_address).
#
# When using Graylog Collector, this URI will be used to receive heartbeat messages and must be accessible for all collectors.
#
# This setting can be overridden on a per-request basis with the "X-Graylog-Server-URL" HTTP request header.
#
# Default: $http_publish_uri
#http_external_uri =

#### Enable CORS headers for HTTP interface
#
# This allows browsers to make Cross-Origin requests from any origin.
# This is disabled for security reasons and typically only needed if running graylog
# with a separate server for frontend development.
#
# Default: false
#http_enable_cors = false

#### Enable GZIP support for HTTP interface
#
# This compresses API responses and therefore helps to reduce
# overall round trip times. This is enabled by default. Uncomment the next line to disable it.
#http_enable_gzip = false

# The maximum size of the HTTP request headers in bytes.
#http_max_header_size = 8192

# The size of the thread pool used exclusively for serving the HTTP interface.
#http_thread_pool_size = 64

################
# HTTPS settings
################

#### Enable HTTPS support for the HTTP interface
#
# This secures the communication with the HTTP interface with TLS to prevent request forgery and eavesdropping.
#
# Default: false
#http_enable_tls = true

# The X.509 certificate chain file in PEM format to use for securing the HTTP interface.
#http_tls_cert_file = /path/to/graylog.crt

# The PKCS#8 private key file in PEM format to use for securing the HTTP interface.
#http_tls_key_file = /path/to/graylog.key

# The password to unlock the private key used for securing the HTTP interface.
#http_tls_key_password = secret

# If set to "true", Graylog will periodically investigate indices to figure out which fields are used in which streams.
# It will make field list in Graylog interface show only fields used in selected streams, but can decrease system performance,
# especially on systems with great number of streams and fields.
stream_aware_field_types=false

# Comma separated list of trusted proxies that are allowed to set the client address with X-Forwarded-For
# header. May be subnets, or hosts.
#trusted_proxies = 127.0.0.1/32, 0:0:0:0:0:0:0:1/128

# List of Elasticsearch hosts Graylog should connect to.
# Need to be specified as a comma-separated list of valid URIs for the http ports of your elasticsearch nodes.
# If one or more of your elasticsearch hosts require authentication, include the credentials in each node URI that
# requires authentication.
#
# Default: http://127.0.0.1:9200
# Tachtler
# default: #elasticsearch_hosts = http://node1:9200,http://user:password@node2:9200
elasticsearch_hosts = http://127.0.0.1:9200

# Maximum number of attempts to connect to elasticsearch on boot for the version probe.
#
# Default: 0, retry indefinitely with the given delay until a connection could be established
#elasticsearch_version_probe_attempts = 5

# Waiting time in between connection attempts for elasticsearch_version_probe_attempts
#
# Default: 5s
#elasticsearch_version_probe_delay = 5s

# Maximum amount of time to wait for successful connection to Elasticsearch HTTP port.
#
# Default: 10 Seconds
#elasticsearch_connect_timeout = 10s

# Maximum amount of time to wait for reading back a response from an Elasticsearch server.
# (e. g. during search, index creation, or index time-range calculations)
#
# Default: 60 seconds
#elasticsearch_socket_timeout = 60s

# Maximum idle time for an Elasticsearch connection. If this is exceeded, this connection will
# be tore down.
#
# Default: inf
#elasticsearch_idle_timeout = -1s

# Maximum number of total connections to Elasticsearch.
#
# Default: 200
#elasticsearch_max_total_connections = 200

# Maximum number of total connections per Elasticsearch route (normally this means per
# elasticsearch server).
#
# Default: 20
#elasticsearch_max_total_connections_per_route = 20

# Maximum number of times Graylog will retry failed requests to Elasticsearch.
#
# Default: 2
#elasticsearch_max_retries = 2

# Enable automatic Elasticsearch node discovery through Nodes Info,
# see https://www.elastic.co/guide/en/elasticsearch/reference/5.4/cluster-nodes-info.html
#
# WARNING: Automatic node discovery does not work if Elasticsearch requires authentication, e. g. with Shield.
#
# Default: false
#elasticsearch_discovery_enabled = true

# Filter for including/excluding Elasticsearch nodes in discovery according to their custom attributes,
# see https://www.elastic.co/guide/en/elasticsearch/reference/5.4/cluster.html#cluster-nodes
#
# Default: empty
#elasticsearch_discovery_filter = rack:42

# Frequency of the Elasticsearch node discovery.
#
# Default: 30s
# elasticsearch_discovery_frequency = 30s

# Set the default scheme when connecting to Elasticsearch discovered nodes
#
# Default: http (available options: http, https)
#elasticsearch_discovery_default_scheme = http

# Enable payload compression for Elasticsearch requests.
#
# Default: false
#elasticsearch_compression_enabled = true

# Enable use of "Expect: 100-continue" Header for Elasticsearch index requests.
# If this is disabled, Graylog cannot properly handle HTTP 413 Request Entity Too Large errors.
#
# Default: true
#elasticsearch_use_expect_continue = true

# Graylog uses Index Sets to manage settings for groups of indices. The default options for index sets are configurable
# for each index set in Graylog under System > Configuration > Index Set Defaults.
# The following settings are used to initialize in-database defaults on the first Graylog server startup.
# Specify these values if you want the Graylog server and indices to start with specific settings.

# The prefix for the Default Graylog index set.
#
#elasticsearch_index_prefix = graylog

# The name of the index template for the Default Graylog index set.
#
#elasticsearch_template_name = graylog-internal

# The prefix for the for graylog event indices.
#
#default_events_index_prefix = gl-events

# The prefix for graylog system event indices.
#
#default_system_events_index_prefix = gl-system-events

# Analyzer (tokenizer) to use for message and full_message field. The "standard" filter usually is a good idea.
# All supported analyzers are: standard, simple, whitespace, stop, keyword, pattern, language, snowball, custom
# Elasticsearch documentation: https://www.elastic.co/guide/en/elasticsearch/reference/2.3/analysis.html
# Note that this setting only takes effect on newly created indices.
#
#elasticsearch_analyzer = standard

# How many Elasticsearch shards and replicas should be used per index?
#
#elasticsearch_shards = 1
#elasticsearch_replicas = 0

# Maximum number of attempts to connect to datanode on boot.
# Default: 0, retry indefinitely with the given delay until a connection could be established
#datanode_startup_connection_attempts = 5

# Waiting time in between connection attempts for datanode_startup_connection_attempts
#
# Default: 5s
# datanode_startup_connection_delay = 5s

# Disable the optimization of Elasticsearch indices after index cycling. This may take some load from Elasticsearch
# on heavily used systems with large indices, but it will decrease search performance. The default is to optimize
# cycled indices.
#
#disable_index_optimization = true

# Optimize the index down to <= index_optimization_max_num_segments. A higher number may take some load from Elasticsearch
# on heavily used systems with large indices, but it will decrease search performance. The default is 1.
#
#index_optimization_max_num_segments = 1

# Time interval to trigger a full refresh of the index field types for all indexes. This will query ES for all indexes
# and populate any missing field type information to the database.
#
#index_field_type_periodical_full_refresh_interval = 5m

# You can configure the default strategy used to determine when to rotate the currently active write index.
# Multiple rotation strategies are supported, the default being "time-size-optimizing":
#   - "time-size-optimizing" tries to rotate daily, while focussing on optimal sized shards.
#     The global default values can be configured with
#     "time_size_optimizing_retention_min_lifetime" and "time_size_optimizing_retention_max_lifetime".
#   - "count" of messages per index, use elasticsearch_max_docs_per_index below to configure
#   - "size" per index, use elasticsearch_max_size_per_index below to configure
#   - "time" interval between index rotations, use elasticsearch_max_time_per_index to configure
# A strategy may be disabled by specifying the optional enabled_index_rotation_strategies list and excluding that strategy.
#
#enabled_index_rotation_strategies = count,size,time,time-size-optimizing

# The default index rotation strategy to use.
# Tachtler
# default: #rotation_strategy = time-size-optimizing
rotation_strategy = time

# (Approximate) maximum number of documents in an Elasticsearch index before a new index
# is being created, also see no_retention and elasticsearch_max_number_of_indices.
# Configure this if you used 'rotation_strategy = count' above.
#
#elasticsearch_max_docs_per_index = 20000000

# (Approximate) maximum size in bytes per Elasticsearch index on disk before a new index is being created, also see
# no_retention and elasticsearch_max_number_of_indices. Default is 30GB.
# Configure this if you used 'rotation_strategy = size' above.
#
#elasticsearch_max_size_per_index = 32212254720

# (Approximate) maximum time before a new Elasticsearch index is being created, also see
# no_retention and elasticsearch_max_number_of_indices. Default is 1 day.
# Configure this if you used 'rotation_strategy = time' above.
# Please note that this rotation period does not look at the time specified in the received messages, but is
# using the real clock value to decide when to rotate the index!
# Specify the time using a duration and a suffix indicating which unit you want:
#   1w  = 1 week
#   1d  = 1 day
#   12h = 12 hours
# Permitted suffixes are: d for day, h for hour, m for minute, s for second.
#
#elasticsearch_max_time_per_index = 1d

# Controls whether empty indices are rotated. Only applies to the "time" rotation_strategy.
#
#elasticsearch_rotate_empty_index_set=false

# Provides a hard upper limit for the retention period of any index set at configuration time.
#
# This setting is used to validate the value a user chooses for the maximum number of retained indexes, when configuring
# an index set. However, it is only in effect, when a time-based rotation strategy is chosen.
#
# If a rotation strategy other than time-based is selected and/or no value is provided for this setting, no upper limit
# for index retention will be enforced. This is also the default.
# Default: none
#max_index_retention_period = P90d

# Optional upper bound on elasticsearch_max_time_per_index
#
#elasticsearch_max_write_index_age = 1d

# Disable message retention on this node, i. e. disable Elasticsearch index rotation.
#no_retention = false

# Decide what happens with the oldest indices when the maximum number of indices is reached.
# The following strategies are available:
#   - delete # Deletes the index completely (Default)
#   - close # Closes the index and hides it from the system. Can be re-opened later.
#
#retention_strategy = delete

# This configuration list limits the retention strategies available for user configuration via the UI
# The following strategies can be disabled:
#   - delete # Deletes the index completely (Default)
#   - close # Closes the index and hides it from the system. Can be re-opened later.
#   - none # No operation is performed. The index stays open. (Not recommended)
# WARNING: At least one strategy must be enabled. Be careful when extending this list on existing installations!
disabled_retention_strategies = none,close
# How many indices do you want to keep for the delete and close retention types?## Tachtler# default: #elasticsearch_max_number_of_indices = 20
elasticsearch_max_number_of_indices = 31# Disable checking the version of Elasticsearch for being compatible with this Graylog release.# WARNING: Using Graylog with unsupported and untested versions of Elasticsearch may lead to data loss!##elasticsearch_disable_version_check = true# Do you want to allow searches with leading wildcards? This can be extremely resource hungry and should only# be enabled with care. See also: https://docs.graylog.org/docs/query-language
allow_leading_wildcard_searches = false

# Do you want to allow searches to be highlighted? Depending on the size of your messages this can be memory hungry and
# should only be enabled after making sure your Elasticsearch cluster has enough memory.
allow_highlighting = false

# Sets field value suggestion mode. The possible values are:
#  1. "off" - field value suggestions are turned off
#  2. "textual_only" - field values are suggested only for textual fields
#  3. "on" (default) - field values are suggested for all field types, even the types where suggestions are inefficient performance-wise
field_value_suggestion_mode = on
# Global timeout for index optimization (force merge) requests.
# Default: 1h
#elasticsearch_index_optimization_timeout = 1h

# Maximum number of concurrently running index optimization (force merge) jobs.
# If you are using lots of different index sets, you might want to increase that number.
# This value should be set lower than elasticsearch_max_total_connections_per_route, otherwise index optimization
# could deplete all the client connections to the search server and block new messages ingestion for prolonged
# periods of time.
# Default: 10
#elasticsearch_index_optimization_jobs = 10

# Mute the logging-output of ES deprecation warnings during REST calls in the ES RestClient
#elasticsearch_mute_deprecation_warnings = true

# Time interval for index range information cleanups. This setting defines how often stale index range information
# is being purged from the database.
# Default: 1h
#index_ranges_cleanup_interval = 1h

# Batch size for the Elasticsearch output. This is the maximum (!) number of messages the Elasticsearch output
# module will get at once and write to Elasticsearch in a batch call. If the configured batch size has not been
# reached within output_flush_interval seconds, everything that is available will be flushed at once. Remember
# that every outputbuffer processor manages its own batch and performs its own batch write calls.
# ("outputbuffer_processors" variable)
output_batch_size = 500

# Flush interval (in seconds) for the Elasticsearch output. This is the maximum amount of time between two
# batches of messages written to Elasticsearch. It is only effective at all if your minimum number of messages
# for this time period is less than output_batch_size * outputbuffer_processors.
output_flush_interval = 1

# As stream outputs are loaded only on demand, an output which is failing to initialize will be tried over and
# over again. To prevent this, the following configuration options define after how many faults an output will
# not be tried again for an also configurable amount of seconds.
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30

# Number of process buffer processors running in parallel.
# By default, the value will be determined automatically based on the number of CPU cores available to the JVM, using
# the formula (<#cores> * 0.36 + 0.625) rounded to the nearest integer.
# Set this value explicitly to override the dynamically calculated value. Try raising the number if your buffers are
# filling up.
#processbuffer_processors = 5

# Number of output buffer processors running in parallel.
# By default, the value will be determined automatically based on the number of CPU cores available to the JVM, using
# the formula (<#cores> * 0.162 + 0.625) rounded to the nearest integer.
# Set this value explicitly to override the dynamically calculated value. Try raising the number if your buffers are
# filling up.
#outputbuffer_processors = 3

# The size of the thread pool in the output buffer processor.
# Default: 3
#outputbuffer_processor_threads_core_pool_size = 3

# UDP receive buffer size for all message inputs (e. g. SyslogUDPInput).
#udp_recvbuffer_sizes = 1048576

# Wait strategy describing how buffer processors wait on a cursor sequence. (default: sleeping)
# Possible types:
#  - yielding
#     Compromise between performance and CPU usage.
#  - sleeping
#     Compromise between performance and CPU usage. Latency spikes can occur after quiet periods.
#  - blocking
#     High throughput, low latency, higher CPU usage.
#  - busy_spinning
#     Avoids syscalls which could introduce latency jitter. Best when threads can be bound to specific CPU cores.
processor_wait_strategy = blocking
# Size of internal ring buffers. Raise this if raising outputbuffer_processors does not help anymore.
# For optimum performance your LogMessage objects in the ring buffer should fit in your CPU L3 cache.
# Must be a power of 2. (512, 1024, 2048, ...)
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_wait_strategy = blocking
# Number of input buffer processors running in parallel.
#inputbuffer_processors = 2

# Manually stopped inputs are no longer auto-restarted. To re-enable the previous behavior, set auto_restart_inputs to true.
#auto_restart_inputs = true

# Enable the message journal.
message_journal_enabled = true

# The directory which will be used to store the message journal. The directory must be exclusively used by Graylog and
# must not contain any other files than the ones created by Graylog itself.
#
# ATTENTION:
#   If you create a separate partition for the journal files and use a file system creating directories like 'lost+found'
#   in the root directory, you need to create a sub directory for your journal.
#   Otherwise Graylog will log an error message that the journal is corrupt and Graylog will not start.
# Default: <data_dir>/journal
#message_journal_dir = data/journal

# Journal hold messages before they could be written to Elasticsearch.
# For a maximum of 12 hours or 5 GB whichever happens first.
# During normal operation the journal will be smaller.
#message_journal_max_age = 12h
#message_journal_max_size = 5gb

#message_journal_flush_age = 1m
#message_journal_flush_interval = 1000000
#message_journal_segment_age = 1h
#message_journal_segment_size = 100mb

# Number of threads used exclusively for dispatching internal events. Default is 2.
#async_eventbus_processors = 2

# How many seconds to wait between marking node as DEAD for possible load balancers and starting the actual
# shutdown process. Set to 0 if you have no status checking load balancers in front.
lb_recognition_period_seconds = 3

# Journal usage percentage that triggers requesting throttling for this server node from load balancers. The feature is
# disabled if not set.
#lb_throttle_threshold_percentage = 95

# Every message is matched against the configured streams and it can happen that a stream contains rules which
# take an unusual amount of time to run, for example if its using regular expressions that perform excessive backtracking.
# This will impact the processing of the entire server. To keep such misbehaving stream rules from impacting other
# streams, Graylog limits the execution time for each stream.
# The default values are noted below, the timeout is in milliseconds.
# If the stream matching for one stream took longer than the timeout value, and this happened more than "max_faults" times
# that stream is disabled and a notification is shown in the web interface.
#stream_processing_timeout = 2000
#stream_processing_max_faults = 3

# Since 0.21 the Graylog server supports pluggable output modules. This means a single message can be written to multiple
# outputs. The next setting defines the timeout for a single output module, including the default output module where all
# messages end up.
#
# Time in milliseconds to wait for all message outputs to finish writing a single message.
#output_module_timeout = 10000

# Time in milliseconds after which a detected stale leader node is being rechecked on startup.
#stale_leader_timeout = 2000

# Time in milliseconds which Graylog is waiting for all threads to stop on shutdown.
#shutdown_timeout = 30000

# MongoDB connection string
# See https://docs.mongodb.com/manual/reference/connection-string/ for details
# mongodb_uri = mongodb://localhost/graylog

# Authenticate against the MongoDB server
# '+'-signs in the username or password need to be replaced by '%2B'
# Tachtler
# default: #mongodb_uri = mongodb://grayloguser:secret@localhost:27017/graylog
mongodb_uri = mongodb://grayloguser:geheim0@127.0.0.1:27017/graylog

# Use a replica set instead of a single host
#mongodb_uri = mongodb://grayloguser:secret@localhost:27017,localhost:27018,localhost:27019/graylog?replicaSet=rs01

# DNS Seedlist https://docs.mongodb.com/manual/reference/connection-string/#dns-seedlist-connection-format
#mongodb_uri = mongodb+srv://server.example.org/graylog

# Increase this value according to the maximum connections your MongoDB server can handle from a single client
# if you encounter MongoDB connection problems.
mongodb_max_connections = 1000

# Maximum number of attempts to connect to MongoDB on boot for the version probe.
#
# Default: 0, retry indefinitely until a connection can be established
#mongodb_version_probe_attempts = 5

# Email transport
#transport_email_enabled = false
#transport_email_hostname = mail.example.com
#transport_email_port = 587
#transport_email_use_auth = true
#transport_email_auth_username = you@example.com
#transport_email_auth_password = secret
#transport_email_from_email = graylog@example.com
#transport_email_socket_connection_timeout = 10s
#transport_email_socket_timeout = 10s

# Encryption settings
#
# ATTENTION:
#   Using SMTP with STARTTLS *and* SMTPS at the same time is *not* possible.

# Use SMTP with STARTTLS, see https://en.wikipedia.org/wiki/Opportunistic_TLS
#transport_email_use_tls = true

# Use SMTP over SSL (SMTPS), see https://en.wikipedia.org/wiki/SMTPS
# This is deprecated on most SMTP services!
#transport_email_use_ssl = false

# Specify and uncomment this if you want to include links to the stream in your stream alert mails.
# This should define the fully qualified base url to your web interface exactly the same way as it is accessed by your users.
# Tachtler
# default: #transport_email_web_interface_url = https://graylog.example.com
transport_email_web_interface_url = https://graylog.tachtler.net
# The default connect timeout for outgoing HTTP connections.
# Values must be a positive duration (and between 1 and 2147483647 when converted to milliseconds).
# Default: 5s
#http_connect_timeout = 5s

# The default read timeout for outgoing HTTP connections.
# Values must be a positive duration (and between 1 and 2147483647 when converted to milliseconds).
# Default: 10s
#http_read_timeout = 10s

# The default write timeout for outgoing HTTP connections.
# Values must be a positive duration (and between 1 and 2147483647 when converted to milliseconds).
# Default: 10s
#http_write_timeout = 10s

# HTTP proxy for outgoing HTTP connections
# ATTENTION: If you configure a proxy, make sure to also configure the "http_non_proxy_hosts" option so internal
#            HTTP connections with other nodes does not go through the proxy.
# Examples:
#   - http://proxy.example.com:8123
#   - http://username:password@proxy.example.com:8123
#http_proxy_uri =

# A list of hosts that should be reached directly, bypassing the configured proxy server.
# This is a list of patterns separated by ",". The patterns may start or end with a "*" for wildcards.
# Any host matching one of these patterns will be reached through a direct connection instead of through a proxy.
# Examples:
#   - localhost,127.0.0.1
#   - 10.0.*,*.example.com
#http_non_proxy_hosts =

# Connection timeout for a configured LDAP server (e. g. ActiveDirectory) in milliseconds.
#ldap_connection_timeout = 2000

# Disable the use of a native system stats collector (currently OSHI)
#disable_native_system_stats_collector = false

# The default cache time for dashboard widgets. (Default: 10 seconds, minimum: 1 second)
#dashboard_widget_default_cache_time = 10s

# For some cluster-related REST requests, the node must query all other nodes in the cluster. This is the maximum number
# of threads available for this. Increase it, if '/cluster/*' requests take long to complete.
# Should be http_thread_pool_size * average_cluster_size if you have a high number of concurrent users.
#proxied_requests_thread_pool_size = 64

# The default HTTP call timeout for cluster-related REST requests. This timeout might be overriden for some
# resources in code or other configuration values. (some cluster metrics resources use a lower timeout)
#proxied_requests_default_call_timeout = 5s

# The server is writing processing status information to the database on a regular basis. This setting controls how
# often the data is written to the database.
# Default: 1s (cannot be less than 1s)
#processing_status_persist_interval = 1s

# Configures the threshold for detecting outdated processing status records. Any records that haven't been updated
# in the configured threshold will be ignored.
# Default: 1m (one minute)
#processing_status_update_threshold = 1m

# Configures the journal write rate threshold for selecting processing status records. Any records that have a lower
# one minute rate than the configured value might be ignored. (dependent on number of messages in the journal)
# Default: 1
#processing_status_journal_write_rate_threshold = 1

# Automatically load content packs in "content_packs_dir" on the first start of Graylog.
#content_packs_loader_enabled = false

# The directory which contains content packs which should be loaded on the first start of Graylog.
# Default: <data_dir>/contentpacks
#content_packs_dir = data/contentpacks

# A comma-separated list of content packs (files in "content_packs_dir") which should be applied on
# the first start of Graylog.
# Default: empty
#content_packs_auto_install = grok-patterns.json

# The allowed TLS protocols for system wide TLS enabled servers. (e.g. message inputs, http interface)
# Setting this to an empty value, leaves it up to system libraries and the used JDK to chose a default.
# Default: TLSv1.2,TLSv1.3 (might be automatically adjusted to protocols supported by the JDK)
#enabled_tls_protocols = TLSv1.2,TLSv1.3

# Enable Prometheus exporter HTTP server.
# Default: false
#prometheus_exporter_enabled = false

# IP address and port for the Prometheus exporter HTTP server.
# Default: 127.0.0.1:9833
#prometheus_exporter_bind_address = 127.0.0.1:9833

# Path to the Prometheus exporter core mapping file. If this option is enabled, the full built-in core mapping is
# replaced with the mappings in this file.
# This file is monitored for changes and updates will be applied at runtime.
# Default: none
#prometheus_exporter_mapping_file_path_core = prometheus-exporter-mapping-core.yml

# Path to the Prometheus exporter custom mapping file. If this option is enabled, the mappings in this file are
# configured in addition to the built-in core mappings. The mappings in this file cannot overwrite any core mappings.
# This file is monitored for changes and updates will be applied at runtime.
# Default: none
#prometheus_exporter_mapping_file_path_custom = prometheus-exporter-mapping-custom.yml

# Configures the refresh interval for the monitored Prometheus exporter mapping files.
# Default: 60s
#prometheus_exporter_mapping_file_refresh_interval = 60s

# Optional allowed paths for Graylog data files. If provided, certain operations in Graylog will only be permitted
# if the data file(s) are located in the specified paths (for example, with the CSV File lookup adapter).
# All subdirectories of indicated paths are allowed by default. This Provides an additional layer of security,
# and allows administrators to control where in the file system Graylog users can select files from.
#allowed_auxiliary_paths = /etc/graylog/data-files,/etc/custom-allowed-path

# Do not perform any preflight checks when starting Graylog
# Default: false
#skip_preflight_checks = false

# Ignore any exceptions encountered when running migrations
# Use with caution - skipping failing migrations may result in an inconsistent DB state.
# Default: false
#ignore_migration_failures = false

# Comma-separated list of notification types which should not emit a system event.
# Default: SIDECAR_STATUS_UNKNOWN which would create a new event whenever the status of a sidecar becomes "Unknown"
#system_event_excluded_types = SIDECAR_STATUS_UNKNOWN

# RSS settings for content stream
#content_stream_rss_url = https://www.graylog.org/post
#content_stream_refresh_interval = 7d

# Maximum value that can be set for an event limit.
# Default: 1000
#event_definition_max_event_limit = 1000

# Optional limits on scheduling concurrency by job type. No more than the specified number of worker
# threads will be executing jobs of the specified type across the entire cluster.
# Default: no limitation
# Note: Monitor job queue metrics to avoid excessive backlog of unprocessed jobs when using this setting!
# Available job types in Graylog Open:
#   check-for-cert-renewal-execution-v1
#   event-processor-execution-v1
#   notification-execution-v1
#job_scheduler_concurrency_limits = event-processor-execution-v1:2,notification-execution-v1:2
The password hash serves as the reference value for the user's password. The password hash was created with the following command:
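A minimal sketch, following the hint in the comments of server.conf and assuming the hashed password is the geheim used for the login below:

# echo -n geheim | sha256sum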
Email address of the user with administrator rights.
root_timezone = Europe/Berlin
Sets the time zone for the user with administrator rights. A list of possible settings can be found at the following external link:
Connection URI for the Elasticsearch search server, which requires a username and a password - ACTIVATE!
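The matching configuration line is missing at this point; based on the stock server.conf it would have the following form, where host, user name and password are purely illustrative assumptions:

elasticsearch_hosts = http://grayloguser:geheim@127.0.0.1:9200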
rotation_strategy = time
Changes the rotation strategy of the Elasticsearch search server from the default count (number of entries) to time (time-based rotation of the index).
# elasticsearch_max_docs_per_index = 20000000
NOTE - No longer required as of version 6.0.1 - Deactivates the maximum number of documents per index of the Elasticsearch search server, since the rotation strategy was changed from the default count (number of entries) to time (time-based rotation of the index)!
elasticsearch_max_time_per_index = 1d
Activates the maximum time interval per index of the Elasticsearch search server (here: 1 day), since the rotation strategy was changed from the default count (number of entries) to time (time-based rotation of the index)!
elasticsearch_max_number_of_indices = 31
Maximum number of indices kept by the Elasticsearch search server (here: one index per day, so 31 indices correspond to 31 days of retention), since the rotation strategy was changed from the default count (number of entries) to time (time-based rotation of the index)!
elasticsearch_shards = 1
NOTE - No longer required as of version 6.0.1 - Number of shards, which should correspond to the number of Elasticsearch search servers.
# mongodb_uri = mongodb://localhost/graylog
Connection URI for the MongoDB database server that requires neither a username nor a password - DEACTIVATE!
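Whether the activated, authenticated connection string is accepted can be tested directly with the MongoDB shell, assuming mongosh is installed; the URI is the one configured above:

# mongosh "mongodb://grayloguser:geheim0@127.0.0.1:27017/graylog"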
So that the configuration file /etc/graylog/server/node.id can be created on the first start of the Graylog server, the user under which Graylog is started must be the owner of the directory /etc/graylog/server, which can be achieved with the following command:
# chown graylog:graylog /etc/graylog/server
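Whether the change took effect can be checked with ls; owner and group of the directory should now both be graylog:

# ls -ld /etc/graylog/server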
Graylog: First start
The graylog server can then be started with the following command:
# systemctl start graylog.service
The status of the Graylog server can be queried with the following command:
# systemctl status graylog.service
● graylog.service - Graylog management server
Loaded: loaded (/usr/lib/systemd/system/graylog.service; enabled; vendor p>
Active: active (running) since Sat 2022-01-29 10:59:44 CET; 304ms ago
Main PID: 75499 (java)
Tasks: 18 (limit: 2341)
Memory: 39.7M
CPU: 424ms
CGroup: /system.slice/graylog.service
└─75499 /usr/bin/java -Djava.net.preferIPv6Addresses=true -Djava.l>
Jan 29 10:59:44 server systemd[1]: Started Graylog management server.
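Until the web interface is reachable, the start-up progress can be followed live via the systemd journal; journalctl is a general systemd tool and not specific to the graylog package:

# journalctl -u graylog.service -f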
After opening the Graylog web interface in the browser, the following output should appear:
The login can now be performed by entering the credentials defined in the configuration file /etc/graylog/server/server.conf:
Username:
administrator
Password:
geheim
After a successful login, a screen similar to the following should be displayed:
Creating a Graylog TLS certificate
To be able to reach the Graylog server's inputs not only unencrypted but also via TLS/StartTLS encryption, a certificate must first be created. This can be obtained from an official certificate authority, which of course involves costs, or a so-called self-signed certificate can be used.
The following components are required to use encryption:
your own Certificate Authority (CA), which can create self-signed certificates
a CSR (Certificate Signing Request), which is signed by the Certificate Authority (CA)
a private key, which belongs to the CRT (certificate) and is required to use it
the CRT (certificate) itself, which is issued by the Certificate Authority (CA)
To create a self-signed certificate and the components listed above, the openssl package is required, which should normally already be installed.
Then, change into the directory /etc/ssl with the following command:
# cd /etc/ssl
/etc/ssl/openssl.cnf
At a minimum, the following changes must be made in the configuration file /etc/ssl/openssl.cnf so that SAN (Subject Alternative Names) can be included in the certificate:
(Only the relevant excerpt):
[ v3_req ]
# Extensions to add to a certificate request
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
# Tachtler - NEW -
subjectAltName= @alt_names
# Tachtler - NEW -
[ alt_names ]
DNS.1 = localhost
DNS.2 = 127.0.0.1
DNS.3 = ::1
To create your own Certificate Authority (CA), a script can be used that ships with the openssl installation and is located in the directory /etc/ssl/misc/. The name of the script is /etc/ssl/misc/CA.pl.
NOTE - To be able to add Subject Alternative Names, the following second modification must also be made, since otherwise the -extensions v3_req option will NOT be applied!
NOTE - The Certificate Authority (CA) to be created has a default validity of 3 years!!!
If a validity longer than three years is desired, the following parameter can be adjusted in the script /etc/ssl/misc/CA.pl:
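Only the relevant excerpt; in current OpenSSL versions the variable in /etc/ssl/misc/CA.pl looks roughly as follows, though the exact default may differ between releases:

my $CADAYS = "-days 1095";    # 3 years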
IMPORTANT - The validity of the Certificate Authority (CA) must be longer than the validity of the certificate!!!
The following call creates your own Certificate Authority (CA):
NOTE - Entries that are not required are skipped by entering a dot [.]!
# /etc/ssl/misc/CA.pl -newca
CA certificate filename (or enter to create)
Making CA certificate ...
====
openssl req -new -keyout /etc/ssl/private/cakey.pem -out /etc/ssl/careq.pem
Generating a RSA private key
.............................+++++......+++++
writing new private key to '/etc/ssl/private/cakey.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:DE
State or Province Name (full name) [Some-State]:Bavaria (Bayern)
Locality Name (eg, city) []:Munich (Muenchen)
Organization Name (eg, company) [Internet Widgits Pty Ltd]:tachtler.net
Organizational Unit Name (eg, section) []:.
Common Name (e.g. server FQDN or YOUR name) []:www.tachtler.net
Email Address []:hostmaster@tachtler.net
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:.
==> 0
====
====
openssl ca -create_serial -out /etc/ssl/cacert.pem -days 28024 -batch -keyfile /etc/ssl/private/cakey.pem -selfsign -extensions v3_ca -infiles /etc/ssl/careq.pem
Using configuration from /etc/ssl/openssl.cnf
Enter pass phrase for /etc/ssl/private/cakey.pem:
Check that the request matches the signature
Signature ok
Certificate Details:
Serial Number:
58:b7:f2:c0:f9:30:8d:48:4f:77:d3:56:3d:9d:a9:98:19:e6:a2:49
Validity
Not Before: Apr 10 10:48:57 2023 GMT
Not After : Dec 31 10:48:57 2099 GMT
Subject:
countryName = DE
stateOrProvinceName = Bavaria (Bayern)
organizationName = tachtler.net
commonName = www.tachtler.net
emailAddress = hostmaster@tachtler.net
X509v3 extensions:
X509v3 Subject Key Identifier:
07:39:3F:8F:38:B5:EA:69:3E:FA:BC:C4:AB:7C:30:18:93:26:B8:77
X509v3 Authority Key Identifier:
keyid:07:39:3F:8F:38:B5:EA:69:3E:FA:BC:C4:AB:7C:30:18:93:26:B8:77
X509v3 Basic Constraints: critical
CA:TRUE
Certificate is to be certified until Dec 31 10:48:57 2099 GMT (28403 days)
Write out database with 1 new entries
Data Base Updated
==> 0
====
CA certificate is in /etc/ssl/cacert.pem
Running the script with the call parameter /etc/ssl/misc/CA.pl -newca has created a new directory structure and new files under /etc/ssl, whose contents can conveniently be listed with the following commands:
# ls -l /etc/ssl/newcerts
total 8
-rw-r--r-- 1 root root 4611 Apr 10 12:48 58B7F2C0F9308D484F77D3563D9DA99819E6A249.pem
# ls -l /etc/ssl/private
total 4
-rw------- 1 root root 1854 Apr 10 12:48 cakey.pem
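As a quick cross-check, not part of the original procedure, the subject and validity period of the new CA certificate can be displayed as follows:

# openssl x509 -in /etc/ssl/cacert.pem -noout -subject -dates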
Creating the CSR (Certificate Request)
The script already used to create your own Certificate Authority (CA), located under /etc/ssl/misc and named CA.pl, can now also be used to create:
a CSR (Certificate Signing Request)
a private key
NOTE - The certificate to be created has a default validity of 1 year!!!
If a validity longer than one year is desired, the following parameter can be adjusted in the configuration file /etc/ssl/openssl.cnf:
(Only the relevant excerpt)
...
# Tachtler
# default: default_days = 365 # how long to certify for
default_days = 28023 # how long to certify for 10.04.2023 - 30.12.2099
...
NOTE - Entries that are not required are skipped by entering a dot [.]!
# /etc/ssl/misc/CA.pl -newreq -extra-req -extensions=v3_req
Use of uninitialized value $1 in concatenation (.) or string at /etc/ssl/misc/CA.pl line 137.
====
openssl req -new -keyout newkey.pem -out newreq.pem -days 28023 -extensions=v3_req
Ignoring -days; not generating a certificate
Generating a RSA private key
...+++++....................................+++++
writing new private key to 'newkey.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:DE
State or Province Name (full name) [Some-State]:Bavaria (Bayern)
Locality Name (eg, city) []:Munich (Muenchen)
Organization Name (eg, company) [Internet Widgits Pty Ltd]:tachtler.net
Organizational Unit Name (eg, section) []:.
Common Name (e.g. server FQDN or YOUR name) []:graylog.idmz.tachtler.net
Email Address []:hostmaster@tachtler.net
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:.
==> 0
====
Request is in newreq.pem, private key is in newkey.pem
Running the script with the call parameter /etc/ssl/misc/CA.pl -newreq -extra-req -extensions=v3_req has created two new files under /etc/ssl, which can be listed with the following command:
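The listing command itself is missing at this point; it was presumably an ls call along the lines of:

# ls -l /etc/ssl/newkey.pem /etc/ssl/newreq.pem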
IMPORTANT - The resulting file /etc/ssl/newreq.pem contains the CSR (Certificate Request).
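To confirm that the Subject Alternative Names from the v3_req section actually made it into the request, a quick check that is not part of the original procedure, the CSR can be inspected as follows:

# openssl req -in /etc/ssl/newreq.pem -noout -text | grep -A 1 "Subject Alternative Name"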
Signing the CSR (Certificate Request)
To sign the CSR (Certificate Request) created in the example above with the Certificate Authority (CA) and thus produce a signed CRT (certificate), the script already used when creating the Certificate Authority (CA), located under /etc/ssl/misc and named CA.pl, can be used again with the following command:
# /etc/ssl/misc/CA.pl -sign -extra-ca -extensions=v3_req
====
openssl ca -policy policy_anything -out newcert.pem -extensions=v3_req -infiles newreq.pem
Using configuration from /etc/ssl/openssl.cnf
Enter pass phrase for /etc/ssl/private/cakey.pem:
Check that the request matches the signature
Signature ok
Certificate Details:
Serial Number:
58:b7:f2:c0:f9:30:8d:48:4f:77:d3:56:3d:9d:a9:98:19:e6:a2:4a
Validity
Not Before: Apr 10 12:04:22 2023 GMT
Not After : Dec 30 12:04:22 2099 GMT
Subject:
countryName = DE
stateOrProvinceName = Bavaria (Bayern)
localityName = Munich (Muenchen)
organizationName = tachtler.net
commonName = graylog.idmz.tachtler.net
emailAddress = hostmaster@tachtler.net
X509v3 extensions:
X509v3 Basic Constraints:
CA:FALSE
X509v3 Key Usage:
Digital Signature, Non Repudiation, Key Encipherment
X509v3 Subject Alternative Name:
DNS:localhost, DNS:127.0.0.1, DNS:::1
Certificate is to be certified until Dec 30 12:04:22 2099 GMT (28402 days)
Sign the certificate? [y/n]:y
1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
==> 0
====
Signed certificate is in newcert.pem
Running the script with the call parameter /etc/ssl/misc/CA.pl -sign -extra-ca -extensions=v3_req has created one more new file under /etc/ssl, which can be listed with the following command:
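The listing command is missing here as well and was presumably # ls -l /etc/ssl/newcert.pem. In addition, none of the steps shown above creates the file /etc/ssl/key.pem referenced in the following note; it is typically derived from newkey.pem by stripping the passphrase, for example:

# openssl rsa -in /etc/ssl/newkey.pem -out /etc/ssl/key.pem
Enter pass phrase for /etc/ssl/newkey.pem:
writing RSA key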
IMPORTANT - The resulting file /etc/ssl/key.pem contains the private key WITHOUT a passphrase!
Installing the certificate
After a certificate has been created as described here: Graylog ArchLinux - TLS-Zertifikat erstellen, the required components must still be copied to the appropriate locations in the operating system. The following commands are necessary for this.
Before the final configuration of Graylog for the use of HTTPS can begin, the files created in the previous steps:
/etc/ssl/key.pem
/etc/ssl/newcert.pem
/etc/ssl/cacert.pem
must still be copied, renamed if necessary, and the ownership and file permissions of the respective files adjusted!
First, two new directories are created in the existing directory /etc/graylog/server with the following commands:
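The commands themselves are missing at this point; judging by the copy destinations below, they were presumably:

# mkdir -p /etc/graylog/server/ssl/certs
# mkdir -p /etc/graylog/server/ssl/private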
Then, with the following commands, the respective files are copied to their destinations and renamed where necessary:
# cp -a /etc/ssl/key.pem /etc/graylog/server/ssl/private/graylog.idmz.tachtler.net.key
# cp -a /etc/ssl/newcert.pem /etc/graylog/server/ssl/certs/graylog.idmz.tachtler.net.pem
# cp -a /etc/ssl/cacert.pem /etc/graylog/server/ssl/certs/CAcert.pem
Finally, the ownership and file permissions of the files just copied and renamed must be adjusted.
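The concluding commands are missing as well; a plausible sketch, with owner, group and modes being assumptions based on the usual requirement that only the graylog user needs to read the key:

# chown -R graylog:graylog /etc/graylog/server/ssl
# chmod 640 /etc/graylog/server/ssl/private/graylog.idmz.tachtler.net.key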