Deploying the ES service
Deployment analysis
- In production, ES is deployed as a cluster, typically managed with a StatefulSet.
- By default ES starts its process as the elasticsearch user, and the ES data directory is mounted from a host path, so the directory permissions are those of the host directory. An initContainer can therefore be used to fix the directory ownership before the ES process starts; note that the init container must run in privileged mode (a sketch of the pattern follows this list).
- To deploy with Helm instead, see https://github.com/helm/charts/tree/master/stable/elasticsearch
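A minimal sketch of that init-container pattern (the complete es-statefulset.yaml further below uses the same approach; the image, uid/gid and mount path here are taken from that manifest):
initContainers:
- name: fix-permissions
  image: alpine:3.6
  # chown the data directory to uid/gid 1000 (the elasticsearch user inside the image)
  # before the main elasticsearch container starts
  command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
  securityContext:
    privileged: true    # run the init container in privileged mode, as noted above
  volumeMounts:
  - name: es-data-volume
    mountPath: /usr/share/elasticsearch/data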
Managing stateful services with StatefulSet
Creating multiple pod replicas with a Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
namespace: default
labels:
app: nginx-deployment
spec:
replicas: 3
selector:
matchLabels:
app: nginx-deployment
template:
metadata:
labels:
app: nginx-deployment
spec:
containers:
- name: nginx
image: nginx:alpine
ports:
- containerPort: 80
Creating multiple pod replicas with a StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: nginx-statefulset
namespace: default
labels:
app: nginx-sts
spec:
replicas: 3
serviceName: "nginx"
selector:
matchLabels:
app: nginx-sts
template:
metadata:
labels:
app: nginx-sts
spec:
containers:
- name: nginx
image: nginx:alpine
ports:
- containerPort: 80
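The practical difference shows up in the pod names: a Deployment creates pods with random hashed suffixes, while a StatefulSet creates pods with stable ordinal names (nginx-statefulset-0, -1, -2) that keep their identity across restarts. A quick way to compare the two (commands only):
$ kubectl -n default get po -l app=nginx-deployment
$ kubectl -n default get po -l app=nginx-sts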
Headless Service
kind: Service
apiVersion: v1
metadata:
name: nginx
namespace: default
spec:
selector:
app: nginx-sts
ports:
- protocol: TCP
port: 80
targetPort: 80
clusterIP: None
$ kubectl -n default exec -ti nginx-statefulset-0 sh
/ # curl nginx-statefulset-2.nginx
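What makes this work is the headless service: each StatefulSet pod gets a stable DNS record of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local, so the short name used above is equivalent to the fully qualified one (assuming curl is available in the image, as above):
/ # curl nginx-statefulset-2.nginx.default.svc.cluster.local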
Deploy and verify
es-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: es-config
namespace: logging
data:
elasticsearch.yml: |
cluster.name: "luffy-elasticsearch"
node.name: "${POD_NAME}"
network.host: 0.0.0.0
discovery.seed_hosts: "es-svc-headless"
cluster.initial_master_nodes: "elasticsearch-0,elasticsearch-1,elasticsearch-2"
es-svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
name: es-svc-headless
namespace: logging
labels:
k8s-app: elasticsearch
spec:
selector:
k8s-app: elasticsearch
clusterIP: None
ports:
- name: in
port: 9300
protocol: TCP
es-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch
namespace: logging
labels:
k8s-app: elasticsearch
spec:
replicas: 3
serviceName: es-svc-headless
selector:
matchLabels:
k8s-app: elasticsearch
template:
metadata:
labels:
k8s-app: elasticsearch
spec:
initContainers:
- command:
- /sbin/sysctl
- -w
- vm.max_map_count=262144
image: alpine:3.6
imagePullPolicy: IfNotPresent
name: elasticsearch-logging-init
resources: {}
securityContext:
privileged: true
- name: fix-permissions
image: alpine:3.6
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: es-data-volume
mountPath: /usr/share/elasticsearch/data
containers:
- name: elasticsearch
image: elasticsearch:7.4.2
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
resources:
limits:
cpu: '1'
memory: 2Gi
requests:
cpu: '1'
memory: 2Gi
ports:
- containerPort: 9200
name: db
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: es-config-volume
mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
subPath: elasticsearch.yml
- name: es-data-volume
mountPath: /usr/share/elasticsearch/data
volumes:
- name: es-config-volume
configMap:
name: es-config
items:
- key: elasticsearch.yml
path: elasticsearch.yml
volumeClaimTemplates:
- metadata:
name: es-data-volume
spec:
accessModes: ["ReadWriteOnce"]
storageClassName: "nfs"
resources:
requests:
storage: 5Gi
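Note that volumeClaimTemplates gives every replica its own PVC, named <template>-<pod>, i.e. es-data-volume-elasticsearch-0 through -2, bound through the nfs StorageClass. Once the pods are up, they can be listed with:
$ kubectl -n logging get pvc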
es-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: es-svc
namespace: logging
labels:
k8s-app: elasticsearch
spec:
selector:
k8s-app: elasticsearch
ports:
- name: out
port: 9200
protocol: TCP
$ kubectl create namespace logging
## Deploy the services
$ kubectl apply -f es-config.yaml
$ kubectl apply -f es-svc-headless.yaml
$ kubectl apply -f es-statefulset.yaml
$ kubectl apply -f es-svc.yaml
## Wait a moment, then check the ES pods; their status should become Running
$ kubectl -n logging get po -o wide
NAME READY STATUS RESTARTS AGE IP
elasticsearch-0 1/1 Running 0 15m 10.244.0.126
elasticsearch-1 1/1 Running 0 15m 10.244.0.127
elasticsearch-2 1/1 Running 0 15m 10.244.0.128
# Then access the service with curl to verify that ES was deployed successfully
$ kubectl -n logging get svc
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
es-svc            ClusterIP   10.104.226.175   <none>        9200/TCP   2s
es-svc-headless   ClusterIP   None             <none>        9300/TCP   32m
$ curl 10.104.226.175:9200
{
"name" : "elasticsearch-2",
"cluster_name" : "luffy-elasticsearch",
"cluster_uuid" : "7FDIACx9T-2ajYcB5qp4hQ",
"version" : {
"number" : "7.4.2",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "2f90bbf7b93631e52bafb59b3b049cb44ec25e96",
"build_date" : "2019-10-28T20:40:44.881551Z",
"build_snapshot" : false,
"lucene_version" : "8.2.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
Deploying Kibana
Deployment analysis
- Kibana needs to expose a web UI to users, so an Ingress with a domain name is used to provide access to it.
- Kibana is a stateless application and is started directly with a Deployment.
- Kibana needs to reach ES; Kubernetes service discovery is used for this directly, at http://es-svc:9200 (a quick in-cluster check follows this list).
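A quick in-cluster sanity check that the service name resolves and ES answers (the throwaway pod name es-check is arbitrary):
$ kubectl -n logging run es-check --rm -it --restart=Never --image=alpine:3.6 -- wget -qO- http://es-svc:9200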
Deploy and verify
kibana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
namespace: logging
labels:
app: kibana
spec:
selector:
matchLabels:
app: "kibana"
template:
metadata:
labels:
app: kibana
spec:
containers:
- name: kibana
image: kibana:7.4.2
resources:
limits:
cpu: 1000m
requests:
cpu: 100m
env:
- name: ELASTICSEARCH_HOSTS
value: http://es-svc:9200
- name: SERVER_NAME
value: kibana-logging
- name: SERVER_REWRITEBASEPATH
value: "false"
ports:
- containerPort: 5601
volumeMounts:
- name: config
mountPath: /usr/share/kibana/config/
volumes:
- name: config
configMap:
name: kibana-config
---
apiVersion: v1
kind: ConfigMap
metadata:
name: kibana-config
namespace: logging
data:
kibana.yml: |-
elasticsearch.requestTimeout: 90000
server.host: "0"
xpack.monitoring.ui.container.elasticsearch.enabled: true
---
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: logging
labels:
app: kibana
spec:
ports:
- port: 5601
protocol: TCP
targetPort: 5601
type: ClusterIP
selector:
app: kibana
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kibana
namespace: logging
spec:
ingressClassName: nginx
rules:
- host: kibana.luffy.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: kibana
port:
number: 5601
$ kubectl apply -f kibana.yaml
deployment.apps/kibana created
service/kibana created
ingress.networking.k8s.io/kibana created
## Configure DNS resolution for kibana.luffy.com and open the site to verify; if the page is reachable, the connection to ES is working
# GET /_cat/health?v
# GET /_cat/indices
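The two GET requests above are intended for the Kibana Dev Tools console; the same checks can also be run from the command line against the es-svc ClusterIP obtained earlier:
$ curl 10.104.226.175:9200/_cat/health?v
$ curl 10.104.226.175:9200/_cat/indices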
Deploying the Fluentd service
Deployment analysis
- Fluentd is the log collection service; every workload node in the Kubernetes cluster produces logs, so it is deployed as a DaemonSet.
- To further control resource usage, a node selector label, fluentd=true, is applied to the DaemonSet as an extra filter; only nodes carrying this label will run Fluentd (a quick check is shown after this list).
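Which nodes currently carry the label can be listed with:
$ kubectl get nodes -l fluentd=true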
Deploy the service
Notes:
Docker's log output format differs from containerd's, so when parsing logs make sure to use the parser for the new (CRI) format:
https://github.com/fluent/fluentd-kubernetes-daemonset#use-cri-parser-for-containerdcri-o-logs
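For reference, the two formats look roughly like this (illustrative lines only): the Docker json-file driver writes one JSON object per line, while containerd (CRI) writes space-delimited timestamp, stream, partial/full flag and message fields, which is what the cri parser below expects.
# Docker json-file log line:
{"log":"hello world\n","stream":"stdout","time":"2021-01-01T00:00:00.000000000Z"}
# containerd (CRI) log line:
2021-01-01T00:00:00.000000000Z stdout F hello world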
tail_container_parse.conf
$ cat tail_container_parse.conf
# configuration example
<parse>
@type cri
</parse>
# Create a ConfigMap to be mounted into Fluentd later, adapting it to the containerd log format
$ kubectl -n logging create configmap tail-container-parse --from-file=tail_container_parse.conf
fluentd-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd-es
namespace: logging
labels:
k8s-app: fluentd-es
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd-es
labels:
k8s-app: fluentd-es
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
- ""
resources:
- "namespaces"
- "pods"
verbs:
- "get"
- "watch"
- "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd-es
labels:
k8s-app: fluentd-es
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
name: fluentd-es
namespace: logging
apiGroup: ""
roleRef:
kind: ClusterRole
name: fluentd-es
apiGroup: ""
The DaemonSet definition file; points to note:
- The /var/log/pods/ directory needs to be mounted into the container
- The configuration file from the Fluentd ConfigMap needs to be mounted into the container
- Nodes that should run Fluentd need to be labeled with fluentd=true
fluentd-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
namespace: logging
labels:
k8s-app: fluentd-logging
version: v1
spec:
selector:
matchLabels:
k8s-app: fluentd-logging
version: v1
template:
metadata:
labels:
k8s-app: fluentd-logging
version: v1
spec:
nodeSelector:
fluentd: "true"
serviceAccount: fluentd-es
serviceAccountName: fluentd-es
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch-amd64
env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: FLUENT_ELASTICSEARCH_HOST
value: "es-svc"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENTD_SYSTEMD_CONF
value: "disable"
- name: FLUENTD_PROMETHEUS_CONF
value: "disable"
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log
# When actual pod logs in /var/lib/docker/containers, the following lines should be used.
# - name: dockercontainerlogdirectory
# mountPath: /var/lib/docker/containers
# readOnly: true
# When actual pod logs in /var/log/pods, the following lines should be used.
- name: dockercontainerlogdirectory
mountPath: /var/log/pods
readOnly: true
- mountPath: "/fluentd/etc/tail_container_parse.conf"
name: config
subPath: tail-container-parse
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
# When actual pod logs in /var/lib/docker/containers, the following lines should be used.
# - name: dockercontainerlogdirectory
# hostPath:
# path: /var/lib/docker/containers
# When actual pod logs in /var/log/pods, the following lines should be used.
- name: dockercontainerlogdirectory
hostPath:
path: /var/log/pods
- name: config
configMap:
name: tail-container-parse
items:
- key: tail_container_parse.conf
path: tail-container-parse
## Label k8s-slave1 and k8s-slave2 so that the fluentd log collection service gets deployed on them
$ kubectl label node k8s-slave1 fluentd=true
$ kubectl label node k8s-slave2 fluentd=true
# Create the resources
$ kubectl apply -f fluentd-rbac.yaml
$ kubectl apply -f fluentd-daemonset.yaml
## Then check whether the pods are running properly
$ kubectl -n logging get po -o wide
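Once the Fluentd pods are Running, an end-to-end check is to look at the collector logs for errors and to confirm that new indices start appearing in ES (again using the es-svc ClusterIP from earlier):
$ kubectl -n logging logs -l k8s-app=fluentd-logging --tail=20
$ curl 10.104.226.175:9200/_cat/indices?v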