Preface

I previously set up an ELK stack with Docker Compose, but Elasticsearch ran as a single service rather than as a cluster, and with a large volume of data the load on a single machine gets fairly high. So I plan to expand the existing deployment into a cluster, and to deploy in cluster mode from the start for any future installations. Most of the configuration in this article is modified from the original deployment guide; for any configuration files missing here, please refer to that earlier guide.

Modify the Configuration Files

1. Create the certificate

You can take an Elasticsearch package of the matching version and use the tool bundled with it, or generate the certificate in the existing container. From the Elasticsearch home directory, run:

bin/elasticsearch-certutil cert -out config/elastic-certificates.p12 -pass ""

Once generated, the certificate sits in the installation's config directory; move it into the persistent (bind-mounted) directory.
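If you generate it from the existing container instead, a minimal sketch looks like this (the container name elasticsearch is an assumption; substitute your actual container name):

# run certutil inside the running container ("elasticsearch" is assumed)
docker exec -it elasticsearch \
  bin/elasticsearch-certutil cert -out config/elastic-certificates.p12 -pass ""
# copy the certificate out next to docker-compose.yml so it can be mounted
docker cp elasticsearch:/usr/share/elasticsearch/config/elastic-certificates.p12 ./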

2. Modify docker-compose.yml

version: "3"
services:
  elasticsearch:
    image: elasticsearch:8.1.1
    labels:
      co.elastic.logs/enabled: "false"
    hostname: elasticsearch
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      #- discovery.type=single-node  # commented out: single-node startup is no longer used
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -k http://localhost:9200",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
    volumes:
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./data/elasticsearch:/usr/share/elasticsearch/data
      # mount the certificate
      - ./elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12

  kibana:
    image: kibana:8.1.1
    labels:
      co.elastic.logs/enabled: "false"
    hostname: kibana
    ports:
      - "5601:5601"
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl http://localhost:5601",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    depends_on:
      - elasticsearch

  logstash:
    image: logstash:8.1.1
    hostname: logstash
    ports:
      - "5044:5044"
      - "4569:4569"
      - "9600:9600"
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl http://localhost:9600",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
    volumes:
      - ./logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
      - ./data/logs:/logs
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "LS_OPTS=--config.reload.automatic"
    depends_on:
      elasticsearch: 
        condition: service_healthy
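A note on the healthcheck: once xpack.security.enabled is on, the anonymous curl gets a 401 response, but curl still exits 0, so the container is still reported healthy. If you want the check to verify an authenticated response instead, a sketch like this could replace it (the password is a placeholder):

    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -u elastic:your-password http://localhost:9200 | grep -q cluster_name",
        ]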

3. Modify elasticsearch.yml

network.host: 0.0.0.0
http.port: 9200
transport.port: 9300
cluster.name: anger-cluster
node.name: node-one
network.publish_host: 192.168.1.221
discovery.seed_hosts: ["192.168.1.221", "192.168.1.223"]
cluster.initial_master_nodes: ["node-one", "node-two"]
http.cors.enabled: true
http.cors.allow-origin: "*"
xpack.security.enabled: true
xpack.security.http.ssl.enabled: false
xpack.security.transport.ssl.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

Explanation

# Cluster settings added here; node.name must differ between the two servers, and the IP addresses must match your actual environment
cluster.name: anger-cluster
node.name: node-one
network.publish_host: 192.168.1.221
discovery.seed_hosts: ["192.168.1.221", "192.168.1.223"]
cluster.initial_master_nodes: ["node-one", "node-two"]
# Settings required when expanding to a cluster; without them the node will not start
xpack.security.http.ssl.enabled: false
xpack.security.transport.ssl.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
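For reference, a minimal sketch of the second node's elasticsearch.yml (node-two at 192.168.1.223, following the addresses above) differs only in the node name and publish host; all other lines stay the same:

cluster.name: anger-cluster
node.name: node-two
network.publish_host: 192.168.1.223
discovery.seed_hosts: ["192.168.1.221", "192.168.1.223"]
cluster.initial_master_nodes: ["node-one", "node-two"]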

Once everything is modified, start the original node first (if there is one; for a fresh deployment, start the master node first), then start the other node. If the logs show no errors, the expansion is complete. There is no need to set up Kibana and Logstash on both nodes; one set of those two services is enough. After both nodes are up, you can check the cluster status with the curl commands further below. For a brand-new cluster, you also need to initialize the password on the master node; a sketch of the rollout follows.
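A minimal sketch of that rollout, assuming Docker Compose v2 and that the compose project sits in the current directory on each server (service name elasticsearch as in the compose file above; elasticsearch-reset-password is the 8.x tool for the built-in users):

# on the existing/master node (192.168.1.221)
docker compose up -d
# then on the second node (192.168.1.223)
docker compose up -d
# fresh cluster only: initialize the built-in elastic user's password on the master node
docker compose exec elasticsearch bin/elasticsearch-reset-password -u elastic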

# check node status
# replace the username, password, and IP address with your real values
curl --user elastic:k1PiSIIzPAxMPGldPeUT http://192.168.1.241:9200/_cat/nodes?v
# check cluster health
curl --user elastic:k1PiSIIzPAxMPGldPeUT http://192.168.1.241:9200/_cat/health?v

A status of green means the cluster is healthy.

After the Elasticsearch Cluster Is Created

At this point the original single Elasticsearch node has been expanded into a two-node Elasticsearch cluster. Index data is sharded across the cluster, spreading the storage and load, and every node in the cluster shares the same set of credentials. The addresses in the Kibana and Logstash configuration files need to be expanded to both nodes.

Configuration to change in Kibana
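A minimal sketch of the relevant kibana.yml lines, pointing elasticsearch.hosts at both nodes (the kibana_system credentials are placeholders):

elasticsearch.hosts: ["http://192.168.1.221:9200", "http://192.168.1.223:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "your-password"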

Configuration to change in Logstash
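Likewise, a sketch of the elasticsearch output block in logstash.conf with both nodes listed (the credentials and index name are placeholders):

output {
  elasticsearch {
    hosts => ["http://192.168.1.221:9200", "http://192.168.1.223:9200"]
    user => "elastic"
    password => "your-password"
    index => "logs-%{+YYYY.MM.dd}"
  }
}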

You can also download the archive of configuration files I put together and use it after changing the addresses: download link