Continuing the deployment from "Kubernetes cluster management environment - complete (part 1)":

8. Deploying the master nodes

The kube-apiserver, kube-scheduler and kube-controller-manager components on the master nodes all run as multiple instances: kube-scheduler and kube-controller-manager automatically elect one leader instance while the other instances stay blocked; when the leader dies a new leader is elected, which keeps the service available. kube-apiserver is stateless and is accessed through the kube-nginx proxy, which keeps the service available. The deployment commands below are all executed on the k8s-master01 node, which then distributes files and runs commands remotely.

Download the latest release binaries
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# wget https://dl.k8s.io/v1.14.2/kubernetes-server-linux-amd64.tar.gz
[root@k8s-master01 work]# tar -xzvf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master01 work]# cd kubernetes
[root@k8s-master01 work]# tar -xzvf kubernetes-src.tar.gz

Copy the binaries to all master nodes:
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${node_master_ips[@]}
do
echo ">>> ${node_master_ip}"
scp kubernetes/server/bin/{apiextensions-apiserver,cloud-controller-manager,kube-apiserver,kube-controller-manager,kube-proxy,kube-scheduler,kubeadm,kubectl,kubelet,mounter} root@${node_master_ip}:/opt/k8s/bin/
ssh root@${node_master_ip} "chmod +x /opt/k8s/bin/*"
done
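As an optional sanity check (not part of the original steps), the distributed binaries can be verified on each master node; kube-apiserver is used here as an example and should report the v1.14.2 version downloaded above:

[root@k8s-master01 work]# for node_master_ip in ${node_master_ips[@]}
do
echo ">>> ${node_master_ip}"
ssh root@${node_master_ip} "/opt/k8s/bin/kube-apiserver --version"
done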
8.1 - Deploying a highly available kube-apiserver cluster

Here a three-instance kube-apiserver cluster is deployed. It is accessed through an nginx layer-4 proxy that exposes a single unified VIP address, which keeps the service available. The deployment commands below are all executed on the k8s-master01 node, which then distributes files and runs commands remotely.

1) Create the kubernetes certificate and private key

Create the certificate signing request:
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# cat > kubernetes-csr.json <<EOF
{
"cn": "kubernetes",
"hosts": [
"127.0.0.1",
"172.16.60.250",
"172.16.60.241",
"172.16.60.242",
"172.16.60.243",
"${cluster_kubernetes_svc_ip}",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"c": "cn",
"st": "beijing",
"l": "beijing",
"o": "k8s",
"ou": "4paradigm"
}
]
}
EOF

Notes:
• The hosts field lists the IPs and domain names authorized to use this certificate; here it includes the VIP, the apiserver node IPs, and the kubernetes service IP and domain names;
• The last character of a domain name must not be a dot (e.g. it cannot be kubernetes.default.svc.cluster.local.), otherwise parsing fails with:
x509: cannot parse dnsName "kubernetes.default.svc.cluster.local.";
• If a domain other than cluster.local is used, e.g. opsnull.com, change the last two domain names in the list to kubernetes.default.svc.opsnull and kubernetes.default.svc.opsnull.com;
• The kubernetes service IP is created automatically by the apiserver; it is usually the first IP of the network segment specified by --service-cluster-ip-range, and can later be obtained with:

[root@k8s-master01 work]# kubectl get svc kubernetes
the connection to the server 172.16.60.250:8443 was refused - did you specify the right host or port?

The error above appears because kube-apiserver is not running yet; once the apiserver service has been started, the command will return the service.

Generate the certificate and private key:
[root@k8s-master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem \
-config=/opt/k8s/work/ca-config.json \
-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

[root@k8s-master01 work]# ls kubernetes*pem
kubernetes-key.pem kubernetes.pem

Copy the generated certificate and private key files to all master nodes:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${node_master_ips[@]}
do
echo ">>> ${node_master_ip}"
ssh root@${node_master_ip} "mkdir -p /etc/kubernetes/cert"
scp kubernetes*.pem root@${node_master_ip}:/etc/kubernetes/cert/
done

2) Create the encryption configuration file
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${encryption_key}
      - identity: {}
EOF

Copy the encryption configuration file to the /etc/kubernetes directory on all master nodes:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${node_master_ips[@]}
do
echo ">>> ${node_master_ip}"
scp encryption-config.yaml root@${node_master_ip}:/etc/kubernetes/
done

3) Create the audit policy file
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# cat > audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # The following requests were manually identified as high-volume and low-risk, so drop them.
  - level: None
    resources:
      - group: ""
        resources:
          - endpoints
          - services
          - services/status
    users:
      - 'system:kube-proxy'
    verbs:
      - watch
  - level: None
    resources:
      - group: ""
        resources:
          - nodes
          - nodes/status
    userGroups:
      - 'system:nodes'
    verbs:
      - get
  - level: None
    namespaces:
      - kube-system
    resources:
      - group: ""
        resources:
          - endpoints
    users:
      - 'system:kube-controller-manager'
      - 'system:kube-scheduler'
      - 'system:serviceaccount:kube-system:endpoint-controller'
    verbs:
      - get
      - update
  - level: None
    resources:
      - group: ""
        resources:
          - namespaces
          - namespaces/status
          - namespaces/finalize
    users:
      - 'system:apiserver'
    verbs:
      - get
  # Don't log HPA fetching metrics.
  - level: None
    resources:
      - group: metrics.k8s.io
    users:
      - 'system:kube-controller-manager'
    verbs:
      - get
      - list
  # Don't log these read-only URLs.
  - level: None
    nonResourceURLs:
      - '/healthz*'
      - /version
      - '/swagger*'
  # Don't log events requests.
  - level: None
    resources:
      - group: ""
        resources:
          - events
  # node and pod status calls from nodes are high-volume and can be large, don't log responses for expected updates from nodes
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    users:
      - kubelet
      - 'system:node-problem-detector'
      - 'system:serviceaccount:kube-system:node-problem-detector'
    verbs:
      - update
      - patch
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    userGroups:
      - 'system:nodes'
    verbs:
      - update
      - patch
  # deletecollection calls can be large, don't log responses for expected namespace deletions
  - level: Request
    omitStages:
      - RequestReceived
    users:
      - 'system:serviceaccount:kube-system:namespace-controller'
    verbs:
      - deletecollection
  # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
  # so only log at the Metadata level.
  - level: Metadata
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - secrets
          - configmaps
      - group: authentication.k8s.io
        resources:
          - tokenreviews
  # Get responses can be large; skip them.
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
    verbs:
      - get
      - list
      - watch
  # Default level for known APIs
  - level: RequestResponse
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
  # Default level for all other requests.
  - level: Metadata
    omitStages:
      - RequestReceived
EOF

Distribute the audit policy file:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${node_master_ips[@]}
do
echo ">>> ${node_master_ip}"
scp audit-policy.yaml root@${node_master_ip}:/etc/kubernetes/audit-policy.yaml
done

4) Create the certificate used later to access metrics-server

Create the certificate signing request:
[root@k8s-master01 work]# cat > proxy-client-csr.json <<EOF
{
"cn": "aggregator",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"c": "cn",
"st": "beijing",
"l": "beijing",
"o": "k8s",
"ou": "4paradigm"
}
]
}
EOF

The CN is aggregator; it must match the --requestheader-allowed-names parameter configured for metrics-server, otherwise access will be rejected by metrics-server.

Generate the certificate and private key:
[root@k8s-master01 work]# cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
-ca-key=/etc/kubernetes/cert/ca-key.pem \
-config=/etc/kubernetes/cert/ca-config.json \
-profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client

[root@k8s-master01 work]# ls proxy-client*.pem
proxy-client-key.pem proxy-client.pem

Copy the generated certificate and private key files to all master nodes:
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${node_master_ips[@]}
do
echo ">>> ${node_master_ip}"
scp proxy-client*.pem root@${node_master_ip}:/etc/kubernetes/cert/
done

5) Create the kube-apiserver systemd unit template file
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# cat > kube-apiserver.service.template <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=${k8s_dir}/kube-apiserver
ExecStart=/opt/k8s/bin/kube-apiserver \\
--advertise-address=##node_master_ip## \\
--default-not-ready-toleration-seconds=360 \\
--default-unreachable-toleration-seconds=360 \\
--feature-gates=DynamicAuditing=true \\
--max-mutating-requests-inflight=2000 \\
--max-requests-inflight=4000 \\
--default-watch-cache-size=200 \\
--delete-collection-workers=2 \\
--encryption-provider-config=/etc/kubernetes/encryption-config.yaml \\
--etcd-cafile=/etc/kubernetes/cert/ca.pem \\
--etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \\
--etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \\
--etcd-servers=${etcd_endpoints} \\
--bind-address=##node_master_ip## \\
--secure-port=6443 \\
--tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \\
--tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \\
--insecure-port=0 \\
--audit-dynamic-configuration \\
--audit-log-maxage=15 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-mode=batch \\
--audit-log-truncate-enabled \\
--audit-log-batch-buffer-size=20000 \\
--audit-log-batch-max-size=2 \\
--audit-log-path=${k8s_dir}/kube-apiserver/audit.log \\
--audit-policy-file=/etc/kubernetes/audit-policy.yaml \\
--profiling \\
--anonymous-auth=false \\
--client-ca-file=/etc/kubernetes/cert/ca.pem \\
--enable-bootstrap-token-auth \\
--requestheader-allowed-names="" \\
--requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
--requestheader-extra-headers-prefix="X-Remote-Extra-" \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--service-account-key-file=/etc/kubernetes/cert/ca.pem \\
--authorization-mode=Node,RBAC \\
--runtime-config=api/all=true \\
--enable-admission-plugins=NodeRestriction \\
--allow-privileged=true \\
--apiserver-count=3 \\
--event-ttl=168h \\
--kubelet-certificate-authority=/etc/kubernetes/cert/ca.pem \\
--kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \\
--kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \\
--kubelet-https=true \\
--kubelet-timeout=10s \\
--proxy-client-cert-file=/etc/kubernetes/cert/proxy-client.pem \\
--proxy-client-key-file=/etc/kubernetes/cert/proxy-client-key.pem \\
--service-cluster-ip-range=${service_cidr} \\
--service-node-port-range=${node_port_range} \\
--logtostderr=true \\
--enable-aggregator-routing=true \\
--v=2
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Notes:
--advertise-address: the IP the apiserver advertises to the cluster (the backend endpoint IP of the kubernetes service);
--default-*-toleration-seconds: thresholds related to node unavailability;
--max-*-requests-inflight: maximum numbers of in-flight requests;
--etcd-*: certificates for accessing etcd and the etcd server addresses;
--encryption-provider-config: the configuration used to encrypt secrets stored in etcd;
--bind-address: the IP the HTTPS listener binds to; it cannot be 127.0.0.1, otherwise the secure port 6443 cannot be reached from outside;
--secure-port: the HTTPS listening port;
--insecure-port=0: disables the insecure HTTP port (8080);
--tls-*-file: the certificate, private key and CA files used by the apiserver;
--audit-*: parameters for the audit policy and the audit log files;
--client-ca-file: the CA used to verify the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
--enable-bootstrap-token-auth: enables token authentication for kubelet bootstrap;
--requestheader-*: parameters for the kube-apiserver aggregation layer, required by proxy-client and HPA;
--requestheader-client-ca-file: the CA that signed the certificates specified by --proxy-client-cert-file and --proxy-client-key-file; used when the metrics aggregator is enabled;
if --requestheader-allowed-names is not empty, the CN of the --proxy-client-cert-file certificate must be in the allowed names list (default aggregator);
--service-account-key-file: the public key file used to verify ServiceAccount tokens; it is paired with the private key file specified by kube-controller-manager's --service-account-private-key-file;
--runtime-config=api/all=true: enables all API versions, e.g. autoscaling/v2alpha1;
--authorization-mode=Node,RBAC and --anonymous-auth=false: enable the Node and RBAC authorization modes and reject unauthorized requests;
--enable-admission-plugins: enables some plugins that are disabled by default;
--allow-privileged: allows containers to run with privileged permissions;
--apiserver-count=3: the number of apiserver instances;
--event-ttl: how long events are retained;
--kubelet-*: if specified, the apiserver accesses the kubelet APIs over HTTPS; RBAC rules must be defined for the user of the certificate (the kubernetes*.pem certificate above uses the user kubernetes), otherwise access to the kubelet API is denied as unauthorized;
--proxy-client-*: the certificate the apiserver uses to access metrics-server;
--service-cluster-ip-range: the service cluster IP range;
--service-node-port-range: the NodePort port range;

Note:
If the kube-apiserver machine does not run kube-proxy, the --enable-aggregator-routing=true parameter must be added (here the master nodes are not used as worker nodes and do not run kube-proxy, so this parameter is required).
The CA certificate specified by --requestheader-client-ca-file must support both client auth and server auth!

Create and distribute the kube-apiserver systemd unit files for each node

Replace the variables in the template file to generate a systemd unit file for each node:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for (( i=0; i < 3; i++ ))
do
sed -e "s/##node_master_name##/${node_master_names[i]}/" -e "s/##node_master_ip##/${node_master_ips[i]}/" kube-apiserver.service.template > kube-apiserver-${node_master_ips[i]}.service
done

Note: node_master_names and node_master_ips are bash arrays of the same length, holding the node names and the corresponding IPs.

[root@k8s-master01 work]# ll kube-apiserver*.service
-rw-r--r-- 1 root root 2718 jun 18 10:38 kube-apiserver-172.16.60.241.service
-rw-r--r-- 1 root root 2718 jun 18 10:38 kube-apiserver-172.16.60.242.service
-rw-r--r-- 1 root root 2718 jun 18 10:38 kube-apiserver-172.16.60.243.service

Distribute the generated systemd unit files, renaming them to kube-apiserver.service:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${node_master_ips[@]}
do
echo ">>> ${node_master_ip}"
scp kube-apiserver-${node_master_ip}.service root@${node_master_ip}:/etc/systemd/system/kube-apiserver.service
done

6) Start the kube-apiserver service
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${node_master_ips[@]}
do
echo ">>> ${node_master_ip}"
ssh root@${node_master_ip} "mkdir -p ${k8s_dir}/kube-apiserver"
ssh root@${node_master_ip} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver"
done

Note: the working directory must be created before starting the service.

Check the kube-apiserver running status
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${node_master_ips[@]}
do
echo ">>> ${node_master_ip}"
ssh root@${node_master_ip} "systemctl status kube-apiserver |grep 'active:'"
done

Expected output:
>>> 172.16.60.241
active: active (running) since tue 2019-06-18 10:42:42 cst; 1min 6s ago
>>> 172.16.60.242
active: active (running) since tue 2019-06-18 10:42:47 cst; 1min 2s ago
>>> 172.16.60.243
active: active (running) since tue 2019-06-18 10:42:51 cst; 58s ago

Make sure the status is active (running); otherwise check the logs to find the cause (journalctl -u kube-apiserver).

7) Print the data kube-apiserver writes to etcd
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# ETCDCTL_API=3 etcdctl \
--endpoints=${etcd_endpoints} \
--cacert=/opt/k8s/work/ca.pem \
--cert=/opt/k8s/work/etcd.pem \
--key=/opt/k8s/work/etcd-key.pem \
get /registry/ --prefix --keys-only

This is expected to print a large amount of data that has been written to etcd.

8) Check cluster information
[root@k8s-master01 work]# kubectl cluster-info
kubernetes master is running at https://172.16.60.250:8443
to further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

[root@k8s-master01 work]# kubectl get all --all-namespaces
namespace name type cluster-ip external-ip port(s) age
default     service/kubernetes   clusterip   10.254.0.1   <none>   443/tcp   8m25s

Check the cluster component status
[root@k8s-master01 work]# kubectl get componentstatuses   # or run "kubectl get cs"
name status message error
controller-manager unhealthy get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler unhealthy get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0 healthy {"health":"true"}
etcd-2 healthy {"health":"true"}
etcd-1 healthy {"health":"true"}

The controller-manager and scheduler show as unhealthy because these two components have not been deployed yet; check again after they are deployed.

Note:
-> If running kubectl prints the following error, the ~/.kube/config file being used is wrong; switch to the correct account and run the command again:
the connection to the server localhost:8080 was refused - did you specify the right host or port?
-> When running kubectl get componentstatuses, the apiserver sends the health checks to 127.0.0.1 by default. When controller-manager and scheduler run in cluster mode, they may not be on the same machine as kube-apiserver, in which case their status is reported as unhealthy even though they are actually working fine.

9) Check the ports kube-apiserver listens on
[root@k8s-master01 work]# netstat -lnpt|grep kube
tcp 0 0 172.16.60.241:6443 0.0.0.0:* listen 15516/kube-apiserve

Note:
6443: the secure port that accepts HTTPS requests; all requests are authenticated and authorized;
since the insecure port has been disabled, nothing listens on 8080.

10) Grant kube-apiserver permission to access the kubelet API
When commands such as kubectl exec, run and logs are executed, the apiserver forwards the request to the kubelet's HTTPS port.
Here an RBAC rule is defined to grant the user of the certificate used by the apiserver (kubernetes.pem, CN: kubernetes) access to the kubelet API:

[root@k8s-master01 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

11) Check the metrics exposed by kube-apiserver
The root certificate is required.

Fetch the metrics through the nginx proxy port:
[root@k8s-master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.250:8443/metrics|head
# help apiserviceopenapiaggregationcontrollerqueue1_adds (deprecated) total number of adds handled by workqueue: apiserviceopenapiaggregationcontrollerqueue1
# type apiserviceopenapiaggregationcontrollerqueue1_adds counter
apiserviceopenapiaggregationcontrollerqueue1_adds 12194
# help apiserviceopenapiaggregationcontrollerqueue1_depth (deprecated) current depth of workqueue: apiserviceopenapiaggregationcontrollerqueue1
# type apiserviceopenapiaggregationcontrollerqueue1_depth gauge
apiserviceopenapiaggregationcontrollerqueue1_depth 0
# help apiserviceopenapiaggregationcontrollerqueue1_longest_running_processor_microseconds (deprecated) how many microseconds has the longest running processor for apiserviceopenapiaggregationcontrollerqueue1 been running.
# type apiserviceopenapiaggregationcontrollerqueue1_longest_running_processor_microseconds gauge
apiserviceopenapiaggregationcontrollerqueue1_longest_running_processor_microseconds 0
# help apiserviceopenapiaggregationcontrollerqueue1_queue_latency (deprecated) how long an item stays in workqueueapiserviceopenapiaggregationcontrollerqueue1 before being requested.

Fetch the metrics directly from each kube-apiserver node port:
[root@k8s-master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.241:6443/metrics|head
[root@k8s-master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.242:6443/metrics|head
[root@k8s-master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.243:6443/metrics|head
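As a further optional check (not part of the original steps), the apiserver health endpoint can be queried the same way, both per node and through the VIP; the certificate paths below reuse the admin client certificate assumed above, and a healthy instance should simply return ok:

[root@k8s-master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.241:6443/healthz
[root@k8s-master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.250:8443/healthz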
8.2 - Deploying a highly available kube-controller-manager cluster

This cluster contains 3 nodes. After startup, a leader node is elected through the leader-election mechanism and the other nodes stay blocked. When the leader becomes unavailable, the blocked nodes elect a new leader, which keeps the service available. To secure communication, this document first generates an x509 certificate and private key. kube-controller-manager uses this certificate in two cases: when talking to kube-apiserver's secure port, and when serving Prometheus-format metrics on its metrics port (10252). The deployment commands below are all executed on the k8s-master01 node, which then distributes files and runs commands remotely.

1) Create the kube-controller-manager certificate and private key

Create the certificate signing request:
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# cat > kube-controller-manager-csr.json <<EOF
{
"cn": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"hosts": [
"127.0.0.1",
"172.16.60.241",
"172.16.60.242",
"172.16.60.243"
],
"names": [
{
"c": "cn",
"st": "beijing",
"l": "beijing",
"o": "system:kube-controller-manager",
"ou": "4paradigm"
}
]
}
EOF

• The hosts list contains all kube-controller-manager node IPs;
• The CN is system:kube-controller-manager and the O is system:kube-controller-manager; the built-in Kubernetes ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs to work.

Generate the certificate and private key:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem \
-config=/opt/k8s/work/ca-config.json \
-profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

[root@k8s-master01 work]# ll kube-controller-manager*pem
-rw------- 1 root root 1679 jun 18 11:43 kube-controller-manager-key.pem
-rw-r--r-- 1 root root 1517 jun 18 11:43 kube-controller-manager.pem

Distribute the generated certificate and private key to all master nodes:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${node_master_ips[@]}
do
echo ">>> ${node_master_ip}"
scp kube-controller-manager*.pem root@${node_master_ip}:/etc/kubernetes/cert/
done

2) Create and distribute the kubeconfig file

kube-controller-manager uses a kubeconfig file to access the apiserver; it contains the apiserver address, the embedded CA certificate and the kube-controller-manager certificate:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/work/ca.pem \
--embed-certs=true \
--server=${kube_apiserver} \
--kubeconfig=kube-controller-manager.kubeconfig

[root@k8s-master01 work]# kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig

[root@k8s-master01 work]# kubectl config set-context system:kube-controller-manager \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig

[root@k8s-master01 work]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

Distribute the kubeconfig to all master nodes:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${node_master_ips[@]}
do
echo ">>> ${node_master_ip}"
scp kube-controller-manager.kubeconfig root@${node_master_ip}:/etc/kubernetes/
done

3) Create and distribute the kube-controller-manager systemd unit file
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# cat > kube-controller-manager.service.template <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=${k8s_dir}/kube-controller-manager
ExecStart=/opt/k8s/bin/kube-controller-manager \\
--profiling \\
--cluster-name=kubernetes \\
--controllers=*,bootstrapsigner,tokencleaner \\
--kube-api-qps=1000 \\
--kube-api-burst=2000 \\
--leader-elect \\
--use-service-account-credentials=true \\
--concurrent-service-syncs=2 \\
--bind-address=0.0.0.0 \\
--tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \\
--tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \\
--authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
--client-ca-file=/etc/kubernetes/cert/ca.pem \\
--requestheader-allowed-names="" \\
--requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
--requestheader-extra-headers-prefix="X-Remote-Extra-" \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
--cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \\
--experimental-cluster-signing-duration=8760h \\
--horizontal-pod-autoscaler-sync-period=10s \\
--concurrent-deployment-syncs=10 \\
--concurrent-gc-syncs=30 \\
--node-cidr-mask-size=24 \\
--service-cluster-ip-range=${service_cidr} \\
--pod-eviction-timeout=6m \\
--terminated-pod-gc-threshold=10000 \\
--root-ca-file=/etc/kubernetes/cert/ca.pem \\
--service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \\
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
--logtostderr=true \\
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Notes:
The following two lines are generally removed, otherwise running "kubectl get cs" reports the controller-manager status as unhealthy:
--port=0: disables the insecure HTTP port; with it, --address has no effect and --bind-address takes effect;
--secure-port=10252 --bind-address=0.0.0.0: serve HTTPS /metrics requests on port 10252 on all network interfaces;
--kubeconfig: the kubeconfig file kube-controller-manager uses to connect to and authenticate against kube-apiserver;
--authentication-kubeconfig and --authorization-kubeconfig: kube-controller-manager uses them to connect to the apiserver and to authenticate and authorize client requests. kube-controller-manager no longer uses --tls-ca-file to verify the certificates of clients requesting HTTPS metrics. If these two kubeconfig parameters are not configured, client connections to the kube-controller-manager HTTPS port are rejected (insufficient permissions);
--cluster-signing-*-file: signs the certificates created by TLS bootstrap;
--experimental-cluster-signing-duration: validity period of the TLS bootstrap certificates;
--root-ca-file: the CA certificate placed into container ServiceAccounts, used to verify the kube-apiserver certificate;
--service-account-private-key-file: the private key used to sign ServiceAccount tokens; it must be paired with the public key file specified by kube-apiserver's --service-account-key-file;
--service-cluster-ip-range: the service cluster IP range; it must match the same parameter of kube-apiserver;
--leader-elect=true: cluster mode with leader election enabled; the node elected as leader does the work while the others stay blocked;
--controllers=*,bootstrapsigner,tokencleaner: the list of enabled controllers; tokencleaner automatically cleans up expired bootstrap tokens;
--horizontal-pod-autoscaler-*: custom metrics related parameters, supporting autoscaling/v2alpha1;
--tls-cert-file, --tls-private-key-file: the server certificate and key used when serving metrics over HTTPS;
--use-service-account-credentials=true: each controller inside kube-controller-manager uses its own ServiceAccount to access kube-apiserver.

Create and distribute the kube-controller-manager systemd unit files for each node

Replace the variables in the template file to create a systemd unit file for each node:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for (( i=0; i < 3; i++ ))
do
sed -e "s/##node_master_name##/${node_master_names[i]}/" -e "s/##node_master_ip##/${node_master_ips[i]}/" kube-controller-manager.service.template > kube-controller-manager-${node_master_ips[i]}.service
done

Note: node_master_names and node_master_ips are bash arrays of the same length, holding the node names and the corresponding IPs.

[root@k8s-master01 work]# ll kube-controller-manager*.service
-rw-r--r-- 1 root root 1878 jun 18 12:45 kube-controller-manager-172.16.60.241.service
-rw-r--r-- 1 root root 1878 jun 18 12:45 kube-controller-manager-172.16.60.242.service
-rw-r--r-- 1 root root 1878 jun 18 12:45 kube-controller-manager-172.16.60.243.service

Distribute them to all master nodes:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${node_master_ips[@]}
do
echo ">>> ${node_master_ip}"
scp kube-controller-manager-${node_master_ip}.service root@${node_master_ip}:/etc/systemd/system/kube-controller-manager.service
done

Note: the file is renamed to kube-controller-manager.service.

Start the kube-controller-manager service
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${node_master_ips[@]}
do
echo ">>> ${node_master_ip}"
ssh root@${node_master_ip} "mkdir -p ${k8s_dir}/kube-controller-manager"
ssh root@${node_master_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager"
done

Note: the working directory must be created before starting the service.

Check the service running status
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${node_master_ips[@]}
do
echo ">>> ${node_master_ip}"
ssh root@${node_master_ip} "systemctl status kube-controller-manager|grep active"
done

Expected output:
>>> 172.16.60.241
active: active (running) since tue 2019-06-18 12:49:11 cst; 1min 7s ago
>>> 172.16.60.242
active: active (running) since tue 2019-06-18 12:49:11 cst; 1min 7s ago
>>> 172.16.60.243
active: active (running) since tue 2019-06-18 12:49:12 cst; 1min 7s ago

Make sure the status is active (running); otherwise check the logs to find the cause (journalctl -u kube-controller-manager).

kube-controller-manager listens on port 10252 and accepts HTTP requests:
[root@k8s-master01 work]# netstat -lnpt|grep kube-controll
tcp 0 0 172.16.60.241:10252 0.0.0.0:* listen 25709/kube-controll

Check the cluster status; the controller-manager status is now "ok".
Note: if the controller-manager service dies on one or two of the nodes in the kube-controller-manager cluster, the cluster-wide controller-manager status stays "ok" and service continues as long as it is still alive on at least one node.
[root@k8s-master01 work]# kubectl get cs
name status message error
scheduler unhealthy get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager healthy ok
etcd-0 healthy {"health":"true"}
etcd-1 healthy {"health":"true"}
etcd-2 healthy {"health":"true"}

4) Check the exposed metrics

Note: the following commands are executed on the 3 kube-controller-manager nodes. Because the "--port=0" and "--secure-port=10252" parameters were removed from the kube-controller-manager unit file, the kube-controller-manager metrics can only be fetched over HTTP. kube-controller-manager is normally not accessed directly; it is only queried when metrics are collected for monitoring.

[root@k8s-master01 work]# curl -s http://172.16.60.241:10252/metrics|head
# help apiserver_audit_event_total counter of audit events generated and sent to the audit backend.
# type apiserver_audit_event_total counter
apiserver_audit_event_total 0
# help apiserver_audit_requests_rejected_total counter of apiserver requests rejected due to an error in audit logging backend.
# type apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# help apiserver_client_certificate_expiration_seconds distribution of the remaining lifetime on the certificate used to authenticate a request.
# type apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0 [root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem http://172.16.60.241:10252/metrics |head
# help apiserver_audit_event_total counter of audit events generated and sent to the audit backend.
# type apiserver_audit_event_total counter
apiserver_audit_event_total 0
# help apiserver_audit_requests_rejected_total counter of apiserver requests rejected due to an error in audit logging backend.
# type apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# help apiserver_client_certificate_expiration_seconds distribution of the remaining lifetime on the certificate used to authenticate a request.
# type apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0 [root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem http://127.0.0.1:10252/metrics |head
# help apiserver_audit_event_total counter of audit events generated and sent to the audit backend.
# type apiserver_audit_event_total counter
apiserver_audit_event_total 0
# help apiserver_audit_requests_rejected_total counter of apiserver requests rejected due to an error in audit logging backend.
# type apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# help apiserver_client_certificate_expiration_seconds distribution of the remaining lifetime on the certificate used to authenticate a request.
# type apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0 [root@k8s-master01 ~]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem http://172.16.60.241:10252/metrics |head
# help apiserver_audit_event_total counter of audit events generated and sent to the audit backend.
# type apiserver_audit_event_total counter
apiserver_audit_event_total 0
# help apiserver_audit_requests_rejected_total counter of apiserver requests rejected due to an error in audit logging backend.
# type apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# help apiserver_client_certificate_expiration_seconds distribution of the remaining lifetime on the certificate used to authenticate a request.
# type apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0

5) kube-controller-manager permissions

The ClusterRole system:kube-controller-manager has very limited permissions: it can only create resources such as secrets and serviceaccounts. Each controller's permissions are split out into the ClusterRoles system:controller:xxx:

[root@k8s-master01 work]# kubectl describe clusterrole system:kube-controller-manager
name: system:kube-controller-manager
labels: kubernetes.io/bootstrapping=rbac-defaults
annotations: rbac.authorization.kubernetes.io/autoupdate: true
policyrule:
resources non-resource urls resource names verbs
--------- ----------------- -------------- -----
secrets [] [] [create delete get update]
endpoints [] [] [create get update]
serviceaccounts [] [] [create get update]
events [] [] [create patch update]
tokenreviews.authentication.k8s.io [] [] [create]
subjectaccessreviews.authorization.k8s.io [] [] [create]
configmaps [] [] [get]
namespaces [] [] [get]
*.* [] [] [list watch]

The --use-service-account-credentials=true parameter must be added to the kube-controller-manager startup arguments so that the main controller creates a corresponding ServiceAccount xxx-controller for each controller.
The built-in ClusterRoleBinding system:controller:xxx then grants each xxx-controller ServiceAccount the permissions of the corresponding ClusterRole system:controller:xxx.

[root@k8s-master01 work]# kubectl get clusterrole|grep controller
system:controller:attachdetach-controller 141m
system:controller:certificate-controller 141m
system:controller:clusterrole-aggregation-controller 141m
system:controller:cronjob-controller 141m
system:controller:daemon-set-controller 141m
system:controller:deployment-controller 141m
system:controller:disruption-controller 141m
system:controller:endpoint-controller 141m
system:controller:expand-controller 141m
system:controller:generic-garbage-collector 141m
system:controller:horizontal-pod-autoscaler 141m
system:controller:job-controller 141m
system:controller:namespace-controller 141m
system:controller:node-controller 141m
system:controller:persistent-volume-binder 141m
system:controller:pod-garbage-collector 141m
system:controller:pv-protection-controller 141m
system:controller:pvc-protection-controller 141m
system:controller:replicaset-controller 141m
system:controller:replication-controller 141m
system:controller:resourcequota-controller 141m
system:controller:route-controller 141m
system:controller:service-account-controller 141m
system:controller:service-controller 141m
system:controller:statefulset-controller 141m
system:controller:ttl-controller 141m
system:kube-controller-manager 141m

Take the deployment controller as an example:
[root@k8s-master01 work]# kubectl describe clusterrole system:controller:deployment-controller
name: system:controller:deployment-controller
labels: kubernetes.io/bootstrapping=rbac-defaults
annotations: rbac.authorization.kubernetes.io/autoupdate: true
policyrule:
resources non-resource urls resource names verbs
--------- ----------------- -------------- -----
replicasets.apps [] [] [create delete get list patch update watch]
replicasets.extensions [] [] [create delete get list patch update watch]
events [] [] [create patch update]
pods [] [] [get list update watch]
deployments.apps [] [] [get list update watch]
deployments.extensions [] [] [get list update watch]
deployments.apps/finalizers [] [] [update]
deployments.apps/status [] [] [update]
deployments.extensions/finalizers [] [] [update]
deployments.extensions/status [] [] [update]

6) Check the current leader of the kube-controller-manager cluster
[root@k8s-master01 work]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
apiversion: v1
kind: endpoints
metadata:
annotations:
control-plane.alpha.kubernetes.io/leader: '{"holderidentity":"k8s-master02_4e449819-9185-11e9-82b6-005056ac42a4","leasedurationseconds":15,"acquiretime":"2019-06-18t04:55:49z","renewtime":"2019-06-18t05:04:54z","leadertransitions":3}'
creationtimestamp: "2019-06-18t04:03:07z"
name: kube-controller-manager
namespace: kube-system
resourceversion: "4604"
selflink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
uid: fa824018-917d-11e9-90d4-005056ac7c81

As shown above, the current leader is the k8s-master02 node.

Test the high availability of the kube-controller-manager cluster

Stop the kube-controller-manager service on one or two nodes and watch the logs on the other nodes to see whether one of them acquires leadership. For example, stop the kube-controller-manager service on the k8s-master02 node:
[root@k8s-master02 ~]# systemctl stop kube-controller-manager
[root@k8s-master02 ~]# ps -ef|grep kube-controller-manager
root 25677 11006 0 13:06 pts/0 00:00:00 grep --color=auto kube-controller-manager

Then check the current leader of the kube-controller-manager cluster again:
[root@k8s-master01 work]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
apiversion: v1
kind: endpoints
metadata:
annotations:
control-plane.alpha.kubernetes.io/leader: '{"holderidentity":"k8s-master03_4e4c28b5-9185-11e9-b98a-005056ac7136","leasedurationseconds":15,"acquiretime":"2019-06-18t05:06:32z","renewtime":"2019-06-18t05:06:57z","leadertransitions":4}'
creationtimestamp: "2019-06-18t04:03:07z"
name: kube-controller-manager
namespace: kube-system
resourceversion: "4695"
selflink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
uid: fa824018-917d-11e9-90d4-005056ac7c81 发现当前leader已经转移到k8s-master03节点上了!!
8.3 - Deploying a highly available kube-scheduler cluster

This cluster contains 3 nodes. After startup, a leader node is elected through the leader-election mechanism and the other nodes stay blocked. When the leader becomes unavailable, the remaining nodes elect a new leader, which keeps the service available. To secure communication, this document first generates an x509 certificate and private key. kube-scheduler uses this certificate in two cases: when talking to kube-apiserver's secure port, and when serving Prometheus-format metrics on the secure port (HTTPS, 10259). The deployment commands below are all executed on the k8s-master01 node, which then distributes files and runs commands remotely.

1) Create the kube-scheduler certificate and private key

Create the certificate signing request:
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# cat > kube-scheduler-csr.json <<EOF
{
"cn": "system:kube-scheduler",
"hosts": [
"127.0.0.1",
"172.16.60.241",
"172.16.60.242",
"172.16.60.243"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"c": "cn",
"st": "beijing",
"l": "beijing",
"o": "system:kube-scheduler",
"ou": "4paradigm"
}
]
}
EOF

Notes:
The hosts list contains all kube-scheduler node IPs;
The CN and O are both system:kube-scheduler; the built-in Kubernetes ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs to work.

Generate the certificate and private key:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem \
-config=/opt/k8s/work/ca-config.json \
-profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

[root@k8s-master01 work]# ls kube-scheduler*pem
kube-scheduler-key.pem kube-scheduler.pem

Distribute the generated certificate and private key to all master nodes:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${node_master_ips[@]}
do
echo ">>> ${node_master_ip}"
scp kube-scheduler*.pem root@${node_master_ip}:/etc/kubernetes/cert/
done

2) Create and distribute the kubeconfig file

kube-scheduler uses a kubeconfig file to access the apiserver; it contains the apiserver address, the embedded CA certificate and the kube-scheduler certificate:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/work/ca.pem \
--embed-certs=true \
--server=${kube_apiserver} \
--kubeconfig=kube-scheduler.kubeconfig

[root@k8s-master01 work]# kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig

[root@k8s-master01 work]# kubectl config set-context system:kube-scheduler \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig

[root@k8s-master01 work]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

Distribute the kubeconfig to all master nodes:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${node_master_ips[@]}
do
echo ">>> ${node_master_ip}"
scp kube-scheduler.kubeconfig root@${node_master_ip}:/etc/kubernetes/
done

3) Create the kube-scheduler configuration file
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# cat > kube-scheduler.yaml.template <<EOF
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
bindTimeoutSeconds: 600
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-scheduler.kubeconfig"
  qps: 100
enableContentionProfiling: false
enableProfiling: true
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: 0.0.0.0:10251
leaderElection:
  leaderElect: true
metricsBindAddress: 0.0.0.0:10251
EOF

Note: it is best to use 0.0.0.0 as the address here, otherwise "kubectl get cs" reports the scheduler status as "unhealthy".
--kubeconfig: the kubeconfig file kube-scheduler uses to connect to and authenticate against kube-apiserver;
--leader-elect=true: cluster mode with leader election enabled; the node elected as leader does the work while the others stay blocked.

Replace the variables in the template file:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for (( i=0; i < 3; i++ ))
do
sed -e "s/##node_master_name##/${node_master_names[i]}/" -e "s/##node_master_ip##/${node_master_ips[i]}/" kube-scheduler.yaml.template > kube-scheduler-${node_master_ips[i]}.yaml
done

Note: node_master_names and node_master_ips are bash arrays of the same length, holding the node names and the corresponding IPs.

[root@k8s-master01 work]# ll kube-scheduler*.yaml
-rw-r--r-- 1 root root 399 jun 18 14:57 kube-scheduler-172.16.60.241.yaml
-rw-r--r-- 1 root root 399 jun 18 14:57 kube-scheduler-172.16.60.242.yaml
-rw-r--r-- 1 root root 399 jun 18 14:57 kube-scheduler-172.16.60.243.yaml

Distribute the kube-scheduler configuration file to all master nodes:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${node_master_ips[@]}
do
echo ">>> ${node_master_ip}"
scp kube-scheduler-${node_master_ip}.yaml root@${node_master_ip}:/etc/kubernetes/kube-scheduler.yaml
done

Note: the file is renamed to kube-scheduler.yaml.

4) Create the kube-scheduler systemd unit template file
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# cat > kube-scheduler.service.template <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=${k8s_dir}/kube-scheduler
ExecStart=/opt/k8s/bin/kube-scheduler \\
--config=/etc/kubernetes/kube-scheduler.yaml \\
--bind-address=0.0.0.0 \\
--tls-cert-file=/etc/kubernetes/cert/kube-scheduler.pem \\
--tls-private-key-file=/etc/kubernetes/cert/kube-scheduler-key.pem \\
--authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
--client-ca-file=/etc/kubernetes/cert/ca.pem \\
--requestheader-allowed-names="" \\
--requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
--requestheader-extra-headers-prefix="X-Remote-Extra-" \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
--logtostderr=true \\
--v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF

Create and distribute the kube-scheduler systemd unit files for each node

Replace the variables in the template file to create a systemd unit file for each node:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for (( i=0; i < 3; i++ ))
do
sed -e "s/##node_master_name##/${node_master_names[i]}/" -e "s/##node_master_ip##/${node_master_ips[i]}/" kube-scheduler.service.template > kube-scheduler-${node_master_ips[i]}.service
done

Note: node_master_names and node_master_ips are bash arrays of the same length, holding the node names and the corresponding IPs.

[root@k8s-master01 work]# ll kube-scheduler*.service
-rw-r--r-- 1 root root 981 jun 18 15:30 kube-scheduler-172.16.60.241.service
-rw-r--r-- 1 root root 981 jun 18 15:30 kube-scheduler-172.16.60.242.service
-rw-r--r-- 1 root root 981 jun 18 15:30 kube-scheduler-172.16.60.243.service

Distribute the systemd unit files to all master nodes:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${node_master_ips[@]}
do
echo ">>> ${node_master_ip}"
scp kube-scheduler-${node_master_ip}.service root@${node_master_ip}:/etc/systemd/system/kube-scheduler.service
done

5) Start the kube-scheduler service
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${node_master_ips[@]}
do
echo ">>> ${node_master_ip}"
ssh root@${node_master_ip} "mkdir -p ${k8s_dir}/kube-scheduler"
ssh root@${node_master_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler"
done

Note: the working directory must be created before starting the service.

Check the service running status
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${node_master_ips[@]}
do
echo ">>> ${node_master_ip}"
ssh root@${node_master_ip} "systemctl status kube-scheduler|grep active"
done

Expected output:
>>> 172.16.60.241
active: active (running) since tue 2019-06-18 15:33:29 cst; 1min 12s ago
>>> 172.16.60.242
active: active (running) since tue 2019-06-18 15:33:30 cst; 1min 11s ago
>>> 172.16.60.243
active: active (running) since tue 2019-06-18 15:33:30 cst; 1min 11s ago

Make sure the status is active (running); otherwise check the logs to find the cause (journalctl -u kube-scheduler).

Check the cluster status; all components now report "ok":
[root@k8s-master01 work]# kubectl get cs
name status message error
scheduler healthy ok
controller-manager healthy ok
etcd-2 healthy {"health":"true"}
etcd-0 healthy {"health":"true"}
etcd-1 healthy {"health":"true"}

6) Check the exposed metrics

Note: the following commands are executed on the kube-scheduler cluster nodes. kube-scheduler listens on ports 10251 and 10259:
10251: accepts HTTP requests; insecure port, no authentication or authorization required;
10259: accepts HTTPS requests; secure port, authentication and authorization required;
Both endpoints expose /metrics and /healthz.

[root@k8s-master01 work]# netstat -lnpt |grep kube-schedule
tcp6 0 0 :::10251 :::* listen 6075/kube-scheduler
tcp6 0 0 :::10259 :::* listen 6075/kube-scheduler [root@k8s-master01 work]# lsof -i:10251
command pid user fd type device size/off node name
kube-sche 6075 root 3u ipv6 628571 0t0 tcp *:10251 (listen) [root@k8s-master01 work]# lsof -i:10259
command pid user fd type device size/off node name
kube-sche 6075 root 5u ipv6 628574 0t0 tcp *:10259 (listen)

All of the following methods fetch the kube-scheduler metrics (using the HTTP port 10251 and the HTTPS port 10259 respectively):
[root@k8s-master01 work]# curl -s http://172.16.60.241:10251/metrics |head
# help apiserver_audit_event_total counter of audit events generated and sent to the audit backend.
# type apiserver_audit_event_total counter
apiserver_audit_event_total 0
# help apiserver_audit_requests_rejected_total counter of apiserver requests rejected due to an error in audit logging backend.
# type apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# help apiserver_client_certificate_expiration_seconds distribution of the remaining lifetime on the certificate used to authenticate a request.
# type apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0 [root@k8s-master01 work]# curl -s http://127.0.0.1:10251/metrics |head
# help apiserver_audit_event_total counter of audit events generated and sent to the audit backend.
# type apiserver_audit_event_total counter
apiserver_audit_event_total 0
# help apiserver_audit_requests_rejected_total counter of apiserver requests rejected due to an error in audit logging backend.
# type apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# help apiserver_client_certificate_expiration_seconds distribution of the remaining lifetime on the certificate used to authenticate a request.
# type apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0 [root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem http://172.16.60.241:10251/metrics |head
# help apiserver_audit_event_total counter of audit events generated and sent to the audit backend.
# type apiserver_audit_event_total counter
apiserver_audit_event_total 0
# help apiserver_audit_requests_rejected_total counter of apiserver requests rejected due to an error in audit logging backend.
# type apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# help apiserver_client_certificate_expiration_seconds distribution of the remaining lifetime on the certificate used to authenticate a request.
# type apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0 [root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem http://127.0.0.1:10251/metrics |head
# help apiserver_audit_event_total counter of audit events generated and sent to the audit backend.
# type apiserver_audit_event_total counter
apiserver_audit_event_total 0
# help apiserver_audit_requests_rejected_total counter of apiserver requests rejected due to an error in audit logging backend.
# type apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# help apiserver_client_certificate_expiration_seconds distribution of the remaining lifetime on the certificate used to authenticate a request.
# type apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0 [root@k8s-master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.241:10259/metrics |head
# help apiserver_audit_event_total counter of audit events generated and sent to the audit backend.
# type apiserver_audit_event_total counter
apiserver_audit_event_total 0
# help apiserver_audit_requests_rejected_total counter of apiserver requests rejected due to an error in audit logging backend.
# type apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# help apiserver_client_certificate_expiration_seconds distribution of the remaining lifetime on the certificate used to authenticate a request.
# type apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0

7) Check the current leader
[root@k8s-master01 work]# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
apiversion: v1
kind: endpoints
metadata:
annotations:
control-plane.alpha.kubernetes.io/leader: '{"holderidentity":"k8s-master01_5eac29d7-919b-11e9-b242-005056ac7c81","leasedurationseconds":15,"acquiretime":"2019-06-18t07:33:31z","renewtime":"2019-06-18t07:41:13z","leadertransitions":0}'
creationtimestamp: "2019-06-18t07:33:31z"
name: kube-scheduler
namespace: kube-system
resourceversion: "12218"
selflink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
uid: 5f466875-919b-11e9-90d4-005056ac7c81

As shown above, the current leader is the k8s-master01 node.

Test the high availability of the kube-scheduler cluster

Pick any one or two master nodes, stop the kube-scheduler service, and check whether another node acquires leadership. For example, stop the kube-scheduler service on the k8s-master01 node and watch the leader move:
[root@k8s-master01 work]# systemctl stop kube-scheduler
[root@k8s-master01 work]# ps -ef|grep kube-scheduler
root 6871 2379 0 15:42 pts/2 00:00:00 grep --color=auto kube-scheduler

Check the current leader again; leadership has moved to the k8s-master02 node:
[root@k8s-master01 work]# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
apiversion: v1
kind: endpoints
metadata:
annotations:
control-plane.alpha.kubernetes.io/leader: '{"holderidentity":"k8s-master02_5efade79-919b-11e9-bbe2-005056ac42a4","leasedurationseconds":15,"acquiretime":"2019-06-18t07:43:03z","renewtime":"2019-06-18t07:43:12z","leadertransitions":1}'
creationtimestamp: "2019-06-18t07:33:31z"
name: kube-scheduler
namespace: kube-system
resourceversion: "12363"
selflink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
uid: 5f466875-919b-11e9-90d4-005056ac7c81
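As with the controller-manager failover test, an extra cleanup step (not in the original text) is to start the stopped service again so the scheduler cluster is back to three running instances:

[root@k8s-master01 work]# systemctl start kube-scheduler
[root@k8s-master01 work]# systemctl status kube-scheduler | grep active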
9. Deploying the worker nodes

The components running on a Kubernetes worker node are docker, kubelet, kube-proxy and flanneld. The deployment commands below are all executed on the k8s-master01 node, which then distributes files and runs commands remotely.

Install dependency packages
[root@k8s-master01 ~]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 ~]# for node_node_ip in ${node_node_ips[@]}
do
echo ">>> ${node_node_ip}"
ssh root@${node_node_ip} "yum install -y epel-release"
ssh root@${node_node_ip} "yum install -y conntrack ipvsadm ntp ntpdate ipset jq iptables curl sysstat libseccomp && modprobe ip_vs "
done
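An optional extra step (a sketch, not part of the original text): modprobe only loads ip_vs for the current boot, so if the ipvs module should also be loaded automatically after a node reboot, a modules-load configuration file can be distributed to the node machines as well:

[root@k8s-master01 ~]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 ~]# for node_node_ip in ${node_node_ips[@]}
do
echo ">>> ${node_node_ip}"
ssh root@${node_node_ip} "echo ip_vs > /etc/modules-load.d/ipvs.conf"
done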
9.1 - Deploying the docker component

Docker runs and manages containers; kubelet interacts with it through the Container Runtime Interface (CRI). The operations below are all executed on k8s-master01, which then distributes files and runs commands remotely.

1) Download and distribute the docker binaries
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# wget https://download.docker.com/linux/static/stable/x86_64/docker-18.09.6.tgz
[root@k8s-master01 work]# tar -xvf docker-18.09.6.tgz

Distribute the binaries to all node machines:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${node_node_ips[@]}
do
echo ">>> ${node_node_ip}"
scp docker/* root@${node_node_ip}:/opt/k8s/bin/
ssh root@${node_node_ip} "chmod +x /opt/k8s/bin/*"
done

2) Create and distribute the systemd unit file
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# cat > docker.service <<"EOF"
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io

[Service]
WorkingDirectory=##docker_dir##
Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/run/flannel/docker
ExecStart=/opt/k8s/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

Notes:
-> EOF is quoted, so bash does not substitute variables inside the document, such as $DOCKER_NETWORK_OPTIONS (these environment variables are substituted by systemd);
-> dockerd invokes other docker commands at runtime, such as docker-proxy, so the directory containing the docker commands must be added to the PATH environment variable;
-> flanneld writes the network configuration into the /run/flannel/docker file at startup; before starting, dockerd reads the DOCKER_NETWORK_OPTIONS environment variable from that file and uses it to set the docker0 bridge subnet;
-> if multiple EnvironmentFile options are specified, /run/flannel/docker must be placed last (to make sure docker0 uses the bip parameter generated by flanneld);
-> docker must run as the root user;
-> starting with docker 1.13, the default policy of the iptables FORWARD chain may be set to DROP, which makes pings to pod IPs on other nodes fail. If this happens, set the policy to ACCEPT manually:
# iptables -P FORWARD ACCEPT
and add the following command to /etc/rc.local so the FORWARD chain policy is not reset to DROP after a node reboot:
# /sbin/iptables -P FORWARD ACCEPT

Distribute the systemd unit file to all node machines:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# sed -i -e "s|##docker_dir##|${docker_dir}|" docker.service
[root@k8s-master01 work]# for node_node_ip in ${node_node_ips[@]}
do
echo ">>> ${node_node_ip}"
scp docker.service root@${node_node_ip}:/etc/systemd/system/
done

3) Configure and distribute the docker configuration file

Use registry mirrors inside China to speed up image pulls and increase the download concurrency (dockerd must be restarted for this to take effect):
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# cat > docker-daemon.json <<EOF
{
"registry-mirrors": ["https://docker.mirrors.ustc.edu.cn","https://hub-mirror.c.163.com"],
"insecure-registries": ["docker02:35000"],
"max-concurrent-downloads": 20,
"live-restore": true,
"max-concurrent-uploads": 10,
"debug": true,
"data-root": "${docker_dir}/data",
"exec-root": "${docker_dir}/exec",
"log-opts": {
"max-size": "100m",
"max-file": "5"
}
}
EOF

Distribute the docker configuration file to all node machines:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${node_node_ips[@]}
do
echo ">>> ${node_node_ip}"
ssh root@${node_node_ip} "mkdir -p /etc/docker/ ${docker_dir}/{data,exec}"
scp docker-daemon.json root@${node_node_ip}:/etc/docker/daemon.json
done

4) Start the docker service
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${node_node_ips[@]}
do
echo ">>> ${node_node_ip}"
ssh root@${node_node_ip} "systemctl daemon-reload && systemctl enable docker && systemctl restart docker"
done

Check the service running status
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${node_node_ips[@]}
do
echo ">>> ${node_node_ip}"
ssh root@${node_node_ip} "systemctl status docker|grep active"
done

Expected output:
>>> 172.16.60.244
active: active (running) since tue 2019-06-18 16:28:32 cst; 42s ago
>>> 172.16.60.245
active: active (running) since tue 2019-06-18 16:28:31 cst; 42s ago
>>> 172.16.60.246
active: active (running) since tue 2019-06-18 16:28:32 cst; 42s ago

Make sure the status is active (running); otherwise check the logs to find the cause (journalctl -u docker).

5) Check the docker0 bridge
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${node_node_ips[@]}
do
echo ">>> ${node_node_ip}"
ssh root@${node_node_ip} "/usr/sbin/ip addr show flannel.1 && /usr/sbin/ip addr show docker0"
done 预期输出结果:
>>> 172.16.60.244
3: flannel.1:mtu 1450 qdisc noqueue state unknown group default
link/ether c6:c2:d1:5a:9a:8a brd ff:ff:ff:ff:ff:ff
inet 172.30.88.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
4: docker0:mtu 1500 qdisc noqueue state down group default
link/ether 02:42:27:3c:5e:5f brd ff:ff:ff:ff:ff:ff
inet 172.30.88.1/21 brd 172.30.95.255 scope global docker0
valid_lft forever preferred_lft forever
>>> 172.16.60.245
3: flannel.1:mtu 1450 qdisc noqueue state unknown group default
link/ether 02:36:1d:ab:c4:86 brd ff:ff:ff:ff:ff:ff
inet 172.30.56.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
4: docker0:mtu 1500 qdisc noqueue state down group default
link/ether 02:42:6f:36:7d:fb brd ff:ff:ff:ff:ff:ff
inet 172.30.56.1/21 brd 172.30.63.255 scope global docker0
valid_lft forever preferred_lft forever
>>> 172.16.60.246
3: flannel.1:mtu 1450 qdisc noqueue state unknown group default
link/ether 4e:73:d1:0e:27:c0 brd ff:ff:ff:ff:ff:ff
inet 172.30.72.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
4: docker0:mtu 1500 qdisc noqueue state down group default
link/ether 02:42:21:39:f4:9e brd ff:ff:ff:ff:ff:ff
inet 172.30.72.1/21 brd 172.30.79.255 scope global docker0
valid_lft forever preferred_lft forever 确认各node节点的docker0网桥和flannel.1接口的ip一定要处于同一个网段中(如下 172.30.88.0/32 位于 172.30.88.1/21 中)!!! 到任意一个node节点上查看 docker 的状态信息
[root@k8s-node01 ~]# ps -elfh|grep docker
0 s root 21573 18744 0 80 0 - 28180 pipe_w 16:32 pts/2 00:00:00 grep --color=auto docker
4 s root 21147 1 0 80 0 - 173769 futex_ 16:28 ? 00:00:00 /opt/k8s/bin/dockerd --bip=172.30.88.1/21 --ip-masq=false --mtu=1450
4 s root 21175 21147 0 80 0 - 120415 futex_ 16:28 ? 00:00:00 containerd --config /data/k8s/docker/exec/containerd/containerd.toml --log-level debug [root@k8s-node01 ~]# docker info
containers: 0
running: 0
paused: 0
stopped: 0
images: 0
server version: 18.09.6
storage driver: overlay2
backing filesystem: xfs
supports d_type: true
native overlay diff: true
logging driver: json-file
cgroup driver: cgroupfs
plugins:
volume: local
network: bridge host macvlan null overlay
log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
swarm: inactive
runtimes: runc
default runtime: runc
init binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
security options:
seccomp
profile: default
kernel version: 4.4.181-1.el7.elrepo.x86_64
operating system: centos linux 7 (core)
ostype: linux
architecture: x86_64
cpus: 4
total memory: 3.859gib
name: k8s-node01
id: r24d:75e5:2ows:snu5:npse:sbkh:wklz:2zh7:6ity:3be2:yhrg:6wru
docker root dir: /data/k8s/docker/data
debug mode (client): false
debug mode (server): true
file descriptors: 22
goroutines: 43
system time: 2019-06-18t16:32:44.260301822 08:00
eventslisteners: 0
registry: https://index.docker.io/v1/
labels:
experimental: false
insecure registries:
docker02:35000
127.0.0.0/8
registry mirrors:
https://docker.mirrors.ustc.edu.cn/
https://hub-mirror.c.163.com/
live restore enabled: true
product license: community engine
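补充说明:上面注意事项里提到 docker 1.13+ 可能把 iptables FORWARD 链默认策略改为 DROP。下面是一个可选的批量检查小脚本(沿用本文 environment.sh 中的 node_node_ips 变量,仅作示意,并非原部署步骤),用于确认各 node 节点的 FORWARD 策略已是 ACCEPT,且 dockerd 确实使用了 flanneld 下发的 bip 网段:
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${node_node_ips[@]}
do
echo ">>> ${node_node_ip}"
# 预期输出 "-P FORWARD ACCEPT",否则按上面注意事项中的方法手动修改策略
ssh root@${node_node_ip} "iptables -S FORWARD | head -1"
# 查看 dockerd 实际使用的 --bip 参数,应与该节点 flannel.1 的网段一致
ssh root@${node_node_ip} "ps -ef | grep [d]ockerd | grep -o 'bip=[0-9./]*'"
done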
9.2 - 部署 kubelet 组件
kubelet 运行在每个node节点上,接收 kube-apiserver 发送的请求,管理 pod 容器,执行交互式命令,如 exec、run、logs 等。kubelet 启动时自动向 kube-apiserver 注册节点信息,内置的 cadvisor 统计和监控节点的资源使用情况。为确保安全,部署时关闭了 kubelet 的非安全 http 端口,对请求进行认证和授权,拒绝未授权的访问(如 apiserver、heapster 的请求)。下面部署命令均在k8s-master01节点上执行,然后远程分发文件和执行命令。
1)下载和分发 kubelet 二进制文件
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${node_node_ips[@]}
do
echo ">>> ${node_node_ip}"
scp kubernetes/server/bin/kubelet root@${node_node_ip}:/opt/k8s/bin/
ssh root@${node_node_ip} "chmod x /opt/k8s/bin/*"
done 2)创建 kubelet bootstrap kubeconfig 文件
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_name in ${node_node_names[@]}
do
echo ">>> ${node_node_name}" # 创建 token
export bootstrap_token=$(kubeadm token create \
--description kubelet-bootstrap-token \
--groups system:bootstrappers:${node_node_name} \
--kubeconfig ~/.kube/config) # 设置集群参数
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/cert/ca.pem \
--embed-certs=true \
--server=${kube_apiserver} \
--kubeconfig=kubelet-bootstrap-${node_node_name}.kubeconfig # 设置客户端认证参数
kubectl config set-credentials kubelet-bootstrap \
--token=${bootstrap_token} \
--kubeconfig=kubelet-bootstrap-${node_node_name}.kubeconfig # 设置上下文参数
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=kubelet-bootstrap-${node_node_name}.kubeconfig # 设置默认上下文
kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_node_name}.kubeconfig
done 解释说明: 向 kubeconfig 写入的是 token,bootstrap 结束后 kube-controller-manager 为 kubelet 创建 client 和 server 证书; 查看 kubeadm 为各节点创建的 token:
[root@k8s-master01 work]# kubeadm token list --kubeconfig ~/.kube/config
token ttl expires usages description extra groups
0zqowl.aye8f834jtq9vm9t 23h 2019-06-19t16:50:43 08:00 authentication,signing kubelet-bootstrap-token system:bootstrappers:k8s-node03
b46tq2.muab337gxwl0dsqn 23h 2019-06-19t16:50:43 08:00 authentication,signing kubelet-bootstrap-token system:bootstrappers:k8s-node02
heh41x.foguhh1qa5crpzlq 23h 2019-06-19t16:50:42 08:00 authentication,signing kubelet-bootstrap-token system:bootstrappers:k8s-node01 解释说明:
-> token 有效期为 1 天,超期后将不能再被用来 boostrap kubelet,且会被 kube-controller-manager 的 tokencleaner 清理;
-> kube-apiserver 接收 kubelet 的 bootstrap token 后,将请求的 user 设置为 system:bootstrap:<token-id>,group 设置为 system:bootstrappers,
后续将为这个 group 设置 clusterrolebinding; 查看各 token 关联的 secret:
[root@k8s-master01 work]# kubectl get secrets -n kube-system|grep bootstrap-token
bootstrap-token-0zqowl bootstrap.kubernetes.io/token 7 88s
bootstrap-token-b46tq2 bootstrap.kubernetes.io/token 7 88s
bootstrap-token-heh41x bootstrap.kubernetes.io/token 7 89s 3) 分发 bootstrap kubeconfig 文件到所有node节点
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_name in ${node_node_names[@]}
do
echo ">>> ${node_node_name}"
scp kubelet-bootstrap-${node_node_name}.kubeconfig root@${node_node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
done 4) 创建和分发 kubelet 参数配置文件
从 v1.10 开始,部分 kubelet 参数需在配置文件中配置,kubelet --help 会提示:
deprecated: this parameter should be set via the config file specified by the kubelet's --config flag 创建 kubelet 参数配置文件模板:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# cat > kubelet-config.yaml.template <<eof
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "##node_node_ip##"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "##node_node_ip##"
clusterDomain: "${cluster_dns_domain}"
clusterDNS:
  - "${cluster_dns_svc_ip}"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: cgroupfs
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "${cluster_cidr}"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
eof 解释说明:
-> address:kubelet 安全端口(https,10250)监听的地址,不能为 127.0.0.1,否则 kube-apiserver、heapster 等不能调用 kubelet 的 api;
-> readOnlyPort=0:关闭只读端口(默认 10255),等效为未指定;
-> authentication.anonymous.enabled:设置为 false,不允许匿名访问 10250 端口;
-> authentication.x509.clientCAFile:指定签名客户端证书的 ca 证书,开启 https 证书认证;
-> authentication.webhook.enabled=true:开启 https bearer token 认证;
-> 对于未通过 x509 证书和 webhook 认证的请求(kube-apiserver 或其他客户端),将被拒绝,提示 unauthorized;
-> authorization.mode=Webhook:kubelet 使用 SubjectAccessReview api 查询 kube-apiserver 某 user、group 是否具有操作资源的权限(rbac);
-> featureGates.RotateKubeletClientCertificate、featureGates.RotateKubeletServerCertificate:自动 rotate 证书,证书的有效期取决于
kube-controller-manager 的 --experimental-cluster-signing-duration 参数;
-> 需要 root 账户运行; 为各节点创建和分发 kubelet 配置文件:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${node_node_ips[@]}
do
echo ">>> ${node_node_ip}"
sed -e "s/##node_node_ip##/${node_node_ip}/" kubelet-config.yaml.template > kubelet-config-${node_node_ip}.yaml.template
scp kubelet-config-${node_node_ip}.yaml.template root@${node_node_ip}:/etc/kubernetes/kubelet-config.yaml
done 5)创建和分发 kubelet systemd unit 文件
创建 kubelet systemd unit 文件模板:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# cat > kubelet.service.template <<eof
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=${k8s_dir}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
  --allow-privileged=true \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --cni-conf-dir=/etc/cni/net.d \\
  --container-runtime=docker \\
  --container-runtime-endpoint=unix:///var/run/dockershim.sock \\
  --root-dir=${k8s_dir}/kubelet \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=##node_node_name## \\
  --pod-infra-container-image=registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1 \\
  --image-pull-progress-deadline=15m \\
  --volume-plugin-dir=${k8s_dir}/kubelet/kubelet-plugins/volume/exec/ \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
eof 解释说明:
-> 如果设置了 --hostname-override 选项,则 kube-proxy 也需要设置该选项,否则会出现找不到 node 的情况;
-> --bootstrap-kubeconfig:指向 bootstrap kubeconfig 文件,kubelet 使用该文件中的用户名和 token 向 kube-apiserver 发送 tls bootstrapping 请求;
-> k8s approve kubelet 的 csr 请求后,在 --cert-dir 目录创建证书和私钥文件,然后写入 --kubeconfig 文件;
-> --pod-infra-container-image 不使用 redhat 的 pod-infrastructure:latest 镜像,它不能回收容器的僵尸; 为各节点创建和分发 kubelet systemd unit 文件:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_name in ${node_node_names[@]}
do
echo ">>> ${node_node_name}"
sed -e "s/##node_node_name##/${node_node_name}/" kubelet.service.template > kubelet-${node_node_name}.service
scp kubelet-${node_node_name}.service root@${node_node_name}:/etc/systemd/system/kubelet.service
done 6)bootstrap token auth 和授予权限
-> kubelet 启动时查找 --kubeconfig 参数对应的文件是否存在,如果不存在则使用 --bootstrap-kubeconfig 指定的 kubeconfig 文件向 kube-apiserver 发送证书签名请求 (csr)。
-> kube-apiserver 收到 csr 请求后,对其中的 token 进行认证,认证通过后将请求的 user 设置为 system:bootstrap:<token-id>,group 设置为 system:bootstrappers,
这一过程称为 bootstrap token auth。
-> 默认情况下,这个 user 和 group 没有创建 csr 的权限,kubelet 启动失败,错误日志如下:
# journalctl -u kubelet -a |grep -a 2 'certificatesigningrequests'
may 9 22:48:41 k8s-master01 kubelet[128468]: i0526 22:48:41.798230 128468 certificate_manager.go:366] rotating certificates
may 9 22:48:41 k8s-master01 kubelet[128468]: e0526 22:48:41.801997 128468 certificate_manager.go:385] failed while requesting a signed certificate from the master: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: user "system:bootstrap:82jfrm" cannot create resource "certificatesigningrequests" in api group "certificates.k8s.io" at the cluster scope
may 9 22:48:42 k8s-master01 kubelet[128468]: e0526 22:48:42.044828 128468 kubelet.go:2244] node "k8s-master01" not found
may 9 22:48:42 k8s-master01 kubelet[128468]: e0526 22:48:42.078658 128468 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: failed to list *v1.service: unauthorized
may 9 22:48:42 k8s-master01 kubelet[128468]: e0526 22:48:42.079873 128468 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: failed to list *v1.node: unauthorized
may 9 22:48:42 k8s-master01 kubelet[128468]: e0526 22:48:42.082683 128468 reflector.go:126] k8s.io/client-go/informers/factory.go:133: failed to list *v1beta1.csidriver: unauthorized
may 9 22:48:42 k8s-master01 kubelet[128468]: e0526 22:48:42.084473 128468 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: failed to list *v1.pod: unauthorized
may 9 22:48:42 k8s-master01 kubelet[128468]: e0526 22:48:42.088466 128468 reflector.go:126] k8s.io/client-go/informers/factory.go:133: failed to list *v1beta1.runtimeclass: unauthorized 解决办法是:创建一个 clusterrolebinding,将 group system:bootstrappers 和 clusterrole system:node-bootstrapper 绑定:
# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers 7) 启动 kubelet 服务
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${node_node_ips[@]}
do
echo ">>> ${node_node_ip}"
ssh root@${node_node_ip} "mkdir -p ${k8s_dir}/kubelet/kubelet-plugins/volume/exec/"
ssh root@${node_node_ip} "/usr/sbin/swapoff -a"
ssh root@${node_node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
done 解释说明:
-> 启动服务前必须先创建工作目录;
-> 关闭 swap 分区,否则 kubelet 会启动失败 (使用"journalctl -u kubelet |tail"命令查看错误日志) kubelet 启动后使用 --bootstrap-kubeconfig 向 kube-apiserver 发送 csr 请求,
当这个 csr 被 approve 后,kube-controller-manager 为 kubelet 创建 tls 客户端证书、私钥和 --kubeconfig 参数指定的 kubeconfig 文件。 注意:kube-controller-manager 需要配置 --cluster-signing-cert-file 和 --cluster-signing-key-file 参数,才会为 tls bootstrap 创建证书和私钥。 [root@k8s-master01 work]# kubectl get csr
name age requestor condition
csr-4wk6q 108s system:bootstrap:0zqowl pending
csr-mjtl5 110s system:bootstrap:heh41x pending
csr-rfz27 109s system:bootstrap:b46tq2 pending [root@k8s-master01 work]# kubectl get nodes
no resources found. 此时三个node节点的csr均处于 pending 状态; 8)自动 approve csr 请求
创建三个 clusterrolebinding,分别用于自动 approve client、renew client、renew server 证书:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# cat > csr-crb.yaml <<eof
# approve all csrs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# to let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# a clusterrole which instructs the csr approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# to let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
eof 解释说明:
-> auto-approve-csrs-for-group:自动 approve node 的第一次 csr; 注意第一次 csr 时,请求的 group 为 system:bootstrappers;
-> node-client-cert-renewal:自动 approve node 后续过期的 client 证书,自动生成的证书 group 为 system:nodes;
-> node-server-cert-renewal:自动 approve node 后续过期的 server 证书,自动生成的证书 group 为 system:nodes; 执行创建:
[root@k8s-master01 work]# kubectl apply -f csr-crb.yaml 查看 kubelet 的情况 需要耐心等待一段时间(1-10 分钟),三个节点的 csr 都被自动 approved(测试时等待了很长一段时间才被自动approved)
[root@k8s-master01 work]# kubectl get csr
name age requestor condition
csr-4m4hc 37s system:node:k8s-node01 pending
csr-4wk6q 7m29s system:bootstrap:0zqowl approved,issued
csr-h8hq6 36s system:node:k8s-node02 pending
csr-mjtl5 7m31s system:bootstrap:heh41x approved,issued
csr-rfz27 7m30s system:bootstrap:b46tq2 approved,issued
csr-t9p6n 36s system:node:k8s-node03 pending 注意:
pending 的 csr 用于创建 kubelet server 证书,需要手动 approve,后续会说到这个。 此时发现所有node节点状态均为"ready":
[root@k8s-master01 work]# kubectl get nodes
name status roles age version
k8s-node01   ready   <none>   3m      v1.14.2
k8s-node02   ready   <none>   3m      v1.14.2
k8s-node03   ready   <none>   2m59s   v1.14.2 kube-controller-manager 为各node节点生成了 kubeconfig 文件和公私钥(如下在node节点上执行):
[root@k8s-node01 ~]# ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2310 jun 18 17:09 /etc/kubernetes/kubelet.kubeconfig [root@k8s-node01 ~]# ls -l /etc/kubernetes/cert/|grep kubelet
-rw------- 1 root root 1273 jun 18 17:16 kubelet-client-2019-06-18-17-16-31.pem
lrwxrwxrwx 1 root root 59 jun 18 17:16 kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2019-06-18-17-16-31.pem 注意:此时还没有自动生成 kubelet server 证书; 9)手动 approve server cert csr
基于安全性考虑,csr approving controllers 不会自动 approve kubelet server 证书签名请求,需要手动 approve: [root@k8s-master01 work]# kubectl get csr
name age requestor condition
csr-4m4hc 6m4s system:node:k8s-node01 pending
csr-4wk6q 12m system:bootstrap:0zqowl approved,issued
csr-h8hq6 6m3s system:node:k8s-node02 pending
csr-mjtl5 12m system:bootstrap:heh41x approved,issued
csr-rfz27 12m system:bootstrap:b46tq2 approved,issued
csr-t9p6n 6m3s system:node:k8s-node03 pending 记住上面执行结果为"pending"的对应的csr的name名称,然后对这些csr进行手动approve
[root@k8s-master01 work]# kubectl certificate approve csr-4m4hc
certificatesigningrequest.certificates.k8s.io/csr-4m4hc approved [root@k8s-master01 work]# kubectl certificate approve csr-h8hq6
certificatesigningrequest.certificates.k8s.io/csr-h8hq6 approved [root@k8s-master01 work]# kubectl certificate approve csr-t9p6n
certificatesigningrequest.certificates.k8s.io/csr-t9p6n approved 再次查看csr,发现所有的csr都为approved了
[root@k8s-master01 work]# kubectl get csr
name age requestor condition
csr-4m4hc 7m46s system:node:k8s-node01 approved,issued
csr-4wk6q 14m system:bootstrap:0zqowl approved,issued
csr-h8hq6 7m45s system:node:k8s-node02 approved,issued
csr-mjtl5 14m system:bootstrap:heh41x approved,issued
csr-rfz27 14m system:bootstrap:b46tq2 approved,issued
csr-t9p6n 7m45s system:node:k8s-node03 approved,issued 再次到node节点上查看,发现已经自动生成 kubelet server 证书;
[root@k8s-node01 ~]# ls -l /etc/kubernetes/cert/kubelet-*
-rw------- 1 root root 1273 jun 18 17:16 /etc/kubernetes/cert/kubelet-client-2019-06-18-17-16-31.pem
lrwxrwxrwx 1 root root 59 jun 18 17:16 /etc/kubernetes/cert/kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2019-06-18-17-16-31.pem
-rw------- 1 root root 1317 jun 18 17:23 /etc/kubernetes/cert/kubelet-server-2019-06-18-17-23-13.pem
lrwxrwxrwx 1 root root 59 jun 18 17:23 /etc/kubernetes/cert/kubelet-server-current.pem -> /etc/kubernetes/cert/kubelet-server-2019-06-18-17-23-13.pem 10)kubelet 提供的 api 接口
kubelet 启动后监听多个端口,用于接收 kube-apiserver 或其它客户端发送的请求: 在node节点执行下面命令
[root@k8s-node01 ~]# netstat -lnpt|grep kubelet
tcp 0 0 127.0.0.1:40831 0.0.0.0:* listen 24468/kubelet
tcp 0 0 172.16.60.244:10248 0.0.0.0:* listen 24468/kubelet
tcp 0 0 172.16.60.244:10250 0.0.0.0:* listen 24468/kubelet 解释说明:
-> 10248: healthz http服务端口,即健康检查服务的端口
-> 10250: kubelet服务监听的端口,api会检测它是否存活。即https服务,访问该端口时需要认证和授权(即使访问/healthz也需要);
-> 10255:只读端口,可以不用验证和授权机制,直接访问。这里配置"readOnlyPort: 0"表示未开启只读端口10255;如果配置"readOnlyPort: 10255"则打开10255端口
-> 从 k8s v1.10 开始,去除了 --cadvisor-port 参数(默认 4194 端口),不支持访问 cadvisor ui & api。 例如执行"kubectl exec -it nginx-ds-5aedg -- sh"命令时,kube-apiserver会向 kubelet 发送如下请求:
post /exec/default/nginx-ds-5aedg/my-nginx?command=sh&input=1&output=1&tty=1 kubelet 接收 10250 端口的 https 请求,可以访问如下资源:
-> /pods、/runningpods
-> /metrics、/metrics/cadvisor、/metrics/probes
-> /spec
-> /stats、/stats/container
-> /logs
-> /run/、/exec/, /attach/, /portforward/, /containerlogs/ 由于关闭了匿名认证,同时开启了webhook 授权,所有访问10250端口https api的请求都需要被认证和授权。
预定义的 clusterrole system:kubelet-api-admin 授予访问 kubelet 所有 api 的权限(kube-apiserver 使用的 kubernetes 证书 user 授予了该权限): [root@k8s-master01 work]# kubectl describe clusterrole system:kubelet-api-admin
name: system:kubelet-api-admin
labels: kubernetes.io/bootstrapping=rbac-defaults
annotations: rbac.authorization.kubernetes.io/autoupdate: true
policyrule:
resources non-resource urls resource names verbs
--------- ----------------- -------------- -----
nodes/log [] [] [*]
nodes/metrics [] [] [*]
nodes/proxy [] [] [*]
nodes/spec [] [] [*]
nodes/stats [] [] [*]
nodes [] [] [get list watch proxy] 11) kubelet api 认证和授权
kubelet 配置了如下认证参数:
-> authentication.anonymous.enabled:设置为 false,不允许匿名访问 10250 端口;
-> authentication.x509.clientCAFile:指定签名客户端证书的 ca 证书,开启 https 证书认证;
-> authentication.webhook.enabled=true:开启 https bearer token 认证; 同时配置了如下授权参数:
-> authorization.mode=Webhook:开启 rbac 授权; kubelet 收到请求后,使用 clientCAFile 对证书签名进行认证,或者查询 bearer token 是否有效。如果两者都没通过,则拒绝请求,提示 unauthorized:
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://172.16.60.244:10250/metrics
unauthorized [root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456" https://172.16.60.244:10250/metrics
unauthorized 通过认证后,kubelet 使用 subjectaccessreview api 向 kube-apiserver 发送请求,查询证书或 token 对应的 user、group 是否有操作资源的权限(rbac); 下面进行证书认证和授权: # 权限不足的证书;
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://172.16.60.244:10250/metrics
forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics) # 使用部署 kubectl 命令行工具时创建的、具有最高权限的 admin 证书;
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.244:10250/metrics|head
# help apiserver_audit_event_total counter of audit events generated and sent to the audit backend.
# type apiserver_audit_event_total counter
apiserver_audit_event_total 0
# help apiserver_audit_requests_rejected_total counter of apiserver requests rejected due to an error in audit logging backend.
# type apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# help apiserver_client_certificate_expiration_seconds distribution of the remaining lifetime on the certificate used to authenticate a request.
# type apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0 注意:--cacert、--cert、--key 的参数值必须是文件路径,否则返回 401 unauthorized; bear token 认证和授权
创建一个 serviceaccount,将它和 clusterrole system:kubelet-api-admin 绑定,从而具有调用 kubelet api 的权限:
[root@k8s-master01 work]# kubectl create sa kubelet-api-test
[root@k8s-master01 work]# kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
[root@k8s-master01 work]# secret=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
[root@k8s-master01 work]# token=$(kubectl describe secret ${secret} | grep -e '^token' | awk '{print $2}')
[root@k8s-master01 work]# echo ${token}
eyjhbgcioijsuzi1niisimtpzci6iij9.eyjpc3mioijrdwjlcm5ldgvzl3nlcnzpy2vhy2nvdw50iiwia3vizxjuzxrlcy5pby9zzxj2awnlywnjb3vudc9uyw1lc3bhy2uioijkzwzhdwx0iiwia3vizxjuzxrlcy5pby9zzxj2awnlywnjb3vudc9zzwnyzxqubmftzsi6imt1ymvszxqtyxbplxrlc3qtdg9rzw4tanrymneilcjrdwjlcm5ldgvzlmlvl3nlcnzpy2vhy2nvdw50l3nlcnzpy2utywnjb3vudc5uyw1lijoia3vizwxldc1hcgktdgvzdcisimt1ymvybmv0zxmuaw8vc2vydmljzwfjy291bnqvc2vydmljzs1hy2nvdw50lnvpzci6imrjyjljzte0ltkxywmtmtflos05mgq0ltawnta1nmfjn2m4msisinn1yii6inn5c3rlbtpzzxj2awnlywnjb3vuddpkzwzhdwx0omt1ymvszxqtyxbplxrlc3qifq.i_uvqjoumldg4ldurfhxfdotm2addxgequqtcpolp_5g6ui-mjve5jhem_q8otmwfs5tqlcvkjhn2idfsrikk_mbe_yslqsneohdclzwhrvn6x84y62q49y-art12ylspfwwenw-2gawstmorbz7ayyau5-kgqmk95mmx57ic8uwvjylilw4jcnkmon5esomgaog30uvvsbiqvkkytwgtag5tah9wadujqttbjjdolgntpghxj-hmzo2givdgdrbs_unvhzgt2madlpp13qyv8zkibgpsbiwoak_olsfkq5-dirn04ncbh9kkyyh9jccmsuvepaj-lgtwj5zdufrhw 这时,再接着进行kubelet请求
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${token}" https://172.16.60.244:10250/metrics|head
# help apiserver_audit_event_total counter of audit events generated and sent to the audit backend.
# type apiserver_audit_event_total counter
apiserver_audit_event_total 0
# help apiserver_audit_requests_rejected_total counter of apiserver requests rejected due to an error in audit logging backend.
# type apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# help apiserver_client_certificate_expiration_seconds distribution of the remaining lifetime on the certificate used to authenticate a request.
# type apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0 12)cadvisor 和 metrics
cadvisor 是内嵌在 kubelet 二进制中的,统计所在节点各容器的资源(cpu、内存、磁盘、网卡)使用情况的服务。
浏览器访问https://172.16.60.244:10250/metrics 和 https://172.16.60.244:10250/metrics/cadvisor 分别返回 kubelet 和 cadvisor 的 metrics。 注意:
-> kubelet-config.yaml 中设置了 authentication.anonymous.enabled 为 false,不允许匿名证书访问 10250 的 https 服务;
-> 参考下面的"浏览器访问kube-apiserver安全端口",创建和导入相关证书,然后就可以在浏览器里成功访问kube-apiserver和上面的kubelet的10250端口了。 需要通过证书方式访问kubelet的10250端口
[root@k8s-master01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.244:10250/metrics [root@k8s-master01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.244:10250/metrics/cadvisor 13)获取 kubelet 的配置
从 kube-apiserver 获取各节点 kubelet 的配置:
如果发现没有jq命令(json处理工具),可以直接yum安装jq:
[root@k8s-master01 ~]# yum install -y jq 使用部署 kubectl 命令行工具时创建的、具有最高权限的 admin 证书;
[root@k8s-master01 ~]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 ~]# curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem ${kube_apiserver}/api/v1/nodes/k8s-node01/proxy/configz | jq '.kubeletconfig|.kind="kubeletconfiguration"|.apiversion="kubelet.config.k8s.io/v1beta1"'
{
"syncfrequency": "1m0s",
"filecheckfrequency": "20s",
"httpcheckfrequency": "20s",
"address": "172.16.60.244",
"port": 10250,
"rotatecertificates": true,
"servertlsbootstrap": true,
"authentication": {
"x509": {
"clientcafile": "/etc/kubernetes/cert/ca.pem"
},
"webhook": {
"enabled": true,
"cachettl": "2m0s"
},
"anonymous": {
"enabled": false
}
},
"authorization": {
"mode": "webhook",
"webhook": {
"cacheauthorizedttl": "5m0s",
"cacheunauthorizedttl": "30s"
}
},
"registrypullqps": 0,
"registryburst": 20,
"eventrecordqps": 0,
"eventburst": 20,
"enabledebugginghandlers": true,
"enablecontentionprofiling": true,
"healthzport": 10248,
"healthzbindaddress": "172.16.60.244",
"oomscoreadj": -999,
"clusterdomain": "cluster.local",
"clusterdns": [
"10.254.0.2"
],
"streamingconnectionidletimeout": "4h0m0s",
"nodestatusupdatefrequency": "10s",
"nodestatusreportfrequency": "1m0s",
"nodeleasedurationseconds": 40,
"imageminimumgcage": "2m0s",
"imagegchighthresholdpercent": 85,
"imagegclowthresholdpercent": 80,
"volumestatsaggperiod": "1m0s",
"cgroupsperqos": true,
"cgroupdriver": "cgroupfs",
"cpumanagerpolicy": "none",
"cpumanagerreconcileperiod": "10s",
"runtimerequesttimeout": "10m0s",
"hairpinmode": "promiscuous-bridge",
"maxpods": 220,
"podcidr": "172.30.0.0/16",
"podpidslimit": -1,
"resolvconf": "/etc/resolv.conf",
"cpucfsquota": true,
"cpucfsquotaperiod": "100ms",
"maxopenfiles": 1000000,
"contenttype": "application/vnd.kubernetes.protobuf",
"kubeapiqps": 1000,
"kubeapiburst": 2000,
"serializeimagepulls": false,
"evictionhard": {
"memory.available": "100mi"
},
"evictionpressuretransitionperiod": "5m0s",
"enablecontrollerattachdetach": true,
"makeiptablesutilchains": true,
"iptablesmasqueradebit": 14,
"iptablesdropbit": 15,
"failswapon": true,
"containerlogmaxsize": "20mi",
"containerlogmaxfiles": 10,
"configmapandsecretchangedetectionstrategy": "watch",
"enforcenodeallocatable": [
"pods"
],
"kind": "kubeletconfiguration",
"apiversion": "kubelet.config.k8s.io/v1beta1"
} 或者直接执行下面语句:(https://172.16.60.250:8443 就是变量${kube_apiserver})
[root@k8s-master01 ~]# curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.250:8443/api/v1/nodes/k8s-node01/proxy/configz | jq '.kubeletconfig|.kind="kubeletconfiguration"|.apiversion="kubelet.config.k8s.io/v1beta1"'
[root@k8s-master01 ~]# curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.250:8443/api/v1/nodes/k8s-node02/proxy/configz | jq '.kubeletconfig|.kind="kubeletconfiguration"|.apiversion="kubelet.config.k8s.io/v1beta1"'
[root@k8s-master01 ~]# curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.250:8443/api/v1/nodes/k8s-node03/proxy/configz | jq '.kubeletconfig|.kind="kubeletconfiguration"|.apiversion="kubelet.config.k8s.io/v1beta1"'
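上面是逐个节点执行的写法,也可以借助 environment.sh 里的 node_node_names 变量写成循环批量获取(下面只是一个等价的小示例,用 jq 取出 address 字段做快速核对):
[root@k8s-master01 ~]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 ~]# for node_node_name in ${node_node_names[@]}
do
echo ">>> ${node_node_name}"
# 通过 kube-apiserver 代理接口读取各节点 kubelet 的实际运行配置
curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem ${kube_apiserver}/api/v1/nodes/${node_node_name}/proxy/configz | jq '.kubeletconfig.address'
done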
9.3 - 浏览器访问kube-apiserver等安全端口,创建和导入证书的做法
浏览器访问 kube-apiserver 的安全端口 6443 (代理端口是8443)时,提示证书不被信任:
这是因为 kube-apiserver 的 server 证书是我们创建的根证书 ca.pem 签名的,需要将根证书 ca.pem 导入操作系统,并设置永久信任。
这里说下mac os系统客户机上导入证书的方法:
1)点击mac本上的"钥匙串访问" -> "系统" -> "证书" -> "kebernetes"(双击里面的"信任",改成"始终信任"),如下图:
清除浏览器缓存,再次访问,发现证书已经被信任了!(红色感叹号已经消失了)
2)需要给浏览器生成一个 client 证书,访问 apiserver 的 6443 https 端口时使用。这里使用部署 kubectl 命令行工具时创建的 admin 证书、私钥和上面的 ca 证书,创建一个浏览器可以使用 pkcs#12/pfx 格式的证书:
[root@k8s-master01 ~]# cd /opt/k8s/work/
[root@k8s-master01 work]# openssl pkcs12 -export -out admin.pfx -inkey admin-key.pem -in admin.pem -certfile ca.pem
enter export password: # 这里输入自己设定的任意密码,比如"123456"
verifying - enter export password: # 确认密码: 123456 [root@k8s-master01 work]# ll admin.pfx
-rw-r--r-- 1 root root 3613 jun 23 23:56 admin.pfx
将在k8s-master01服务器上生成的client证书admin.pfx拷贝到mac本机,导入到"钥匙串访问" -> "系统" -> "证书" 里面 (导入时会提示输入admin.pfx证书的密码,即"123456"),如下图:
清除浏览器历史记录,一定要重启浏览器,接着访问apiserver地址,接着会提示选择一个浏览器证书,这里选中上面导入的"admin.pfx", 然后再次访问apiserver,发现相应的metrics数据就成功显示出来了!!(注意,如果失败了。则可以删除证书,然后重新生成,重新导入再跟着操作步骤来一遍,清除浏览器缓存,重启浏览器,选择导入的证书,再次访问即可!)
同样的,再上面apiserver访问的client证书导入到本地浏览器后,再访问kubelet的10250端口的metric时,也会提示选择导入的证书"admin.pfx",然后就会正常显示对应的metrics数据了。(k8s集群的其他组件metrics的https证书方式方式同理!)
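补充:把 admin.pfx 导入浏览器之前,也可以先在 k8s-master01 上用 openssl 做个简单检查(可选示例,执行时会提示输入前面设置的导出密码):
[root@k8s-master01 work]# openssl pkcs12 -in admin.pfx -info -noout
[root@k8s-master01 work]# openssl pkcs12 -in admin.pfx -nokeys -clcerts | openssl x509 -noout -subject -dates
第二条命令可以看到证书的 subject(应包含 admin 证书对应的 cn,以实际生成的证书为准)和有效期,确认无误后再导入浏览器。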
9.4 - 部署 kube-proxy 组件
kube-proxy运行在所有的node节点上,它监听apiserver中service和endpoint的变化情况,创建路由规则以提供服务ip和负载均衡功能。下面部署命令均在k8s-master01节点上执行,然后远程分发文件和执行命令。
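补充示例(非原文必需步骤):kube-proxy 以 ipvs 模式运行时依赖 ip_vs 相关内核模块,正式部署前可以先用下面的小脚本在所有 node 节点上批量加载并确认(模块名以各节点实际内核为准,这里以本文使用的 4.4 内核为例):
[root@k8s-master01 ~]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 ~]# for node_node_ip in ${node_node_ips[@]}
do
echo ">>> ${node_node_ip}"
# 加载 ipvs 调度及连接跟踪相关模块,已加载时 modprobe 直接返回
ssh root@${node_node_ip} "modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4"
ssh root@${node_node_ip} "lsmod | grep -e ip_vs -e nf_conntrack_ipv4"
done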
1)下载和分发 kube-proxy 二进制文件
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${node_node_ips[@]}
do
echo ">>> ${node_node_ip}"
scp kubernetes/server/bin/kube-proxy root@${node_node_ip}:/opt/k8s/bin/
ssh root@${node_node_ip} "chmod x /opt/k8s/bin/*"
done 2) 创建 kube-proxy 证书
创建证书签名请求:
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# cat > kube-proxy-csr.json <<eof
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "cn",
      "ST": "beijing",
      "L": "beijing",
      "O": "k8s",
      "OU": "4paradigm"
    }
  ]
}
eof 注意:
CN:指定该证书的 user 为 system:kube-proxy;
预定义的 clusterrolebinding system:node-proxier 将 user system:kube-proxy 与 clusterrole system:node-proxier 绑定,该角色授予了调用 kube-apiserver proxy 相关 api 的权限;
该证书只会被 kube-proxy 当做 client 证书使用,所以 hosts 字段为空; 生成证书和私钥:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem \
-config=/opt/k8s/work/ca-config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy [root@k8s-master01 work]# ll kube-proxy*
-rw-r--r-- 1 root root 1013 jun 24 20:21 kube-proxy.csr
-rw-r--r-- 1 root root 218 jun 24 20:21 kube-proxy-csr.json
-rw------- 1 root root 1679 jun 24 20:21 kube-proxy-key.pem
-rw-r--r-- 1 root root 1411 jun 24 20:21 kube-proxy.pem 3)创建和分发 kubeconfig 文件
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/work/ca.pem \
--embed-certs=true \
--server=${kube_apiserver} \
--kubeconfig=kube-proxy.kubeconfig [root@k8s-master01 work]# kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig [root@k8s-master01 work]# kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig [root@k8s-master01 work]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig 注意:--embed-certs=true:将 ca.pem 和 kube-proxy.pem 证书内容嵌入到生成的 kube-proxy.kubeconfig 文件中(不加时,写入的是证书文件路径); 分发 kubeconfig 文件:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_name in ${node_node_names[@]}
do
echo ">>> ${node_node_name}"
scp kube-proxy.kubeconfig root@${node_node_name}:/etc/kubernetes/
done 4)创建 kube-proxy 配置文件
从 v1.10 开始,kube-proxy 的部分参数可以在配置文件中配置,可以使用 --write-config-to 选项生成该配置文件(示例见下)。
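例如,可以先让 kube-proxy 把默认配置输出到一个临时文件里作对照(输出路径只是举例,仅为示意):
[root@k8s-master01 work]# /opt/k8s/bin/kube-proxy --write-config-to=/tmp/kube-proxy-default.yaml
[root@k8s-master01 work]# head -20 /tmp/kube-proxy-default.yaml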
创建 kube-proxy config 文件模板:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# cat > kube-proxy-config.yaml.template <<eof
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
  qps: 100
bindAddress: ##node_node_ip##
healthzBindAddress: ##node_node_ip##:10256
metricsBindAddress: ##node_node_ip##:10249
enableProfiling: true
clusterCIDR: ${cluster_cidr}
hostnameOverride: ##node_node_name##
mode: "ipvs"
portRange: ""
kubeProxyIPTablesConfiguration:
  masqueradeAll: false
kubeProxyIPVSConfiguration:
  scheduler: rr
  excludeCIDRs: []
eof 注意:
bindaddress: 监听地址;
clientconnection.kubeconfig: 连接 apiserver 的 kubeconfig 文件;
clustercidr: kube-proxy 根据 --cluster-cidr 判断集群内部和外部流量,指定 --cluster-cidr 或 --masquerade-all 选项后 kube-proxy 才会对访问 service ip 的请求做 snat;
hostnameoverride: 参数值必须与 kubelet 的值一致,否则 kube-proxy 启动后会找不到该 node,从而不会创建任何 ipvs 规则;
mode: 使用 ipvs 模式; 为各节点创建和分发 kube-proxy 配置文件:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for (( i=0; i < 3; i++ ))
do
echo ">>> ${node_node_names[i]}"
sed -e "s/##node_node_name##/${node_node_names[i]}/" -e "s/##node_node_ip##/${node_node_ips[i]}/" kube-proxy-config.yaml.template > kube-proxy-config-${node_node_names[i]}.yaml.template
scp kube-proxy-config-${node_node_names[i]}.yaml.template root@${node_node_names[i]}:/etc/kubernetes/kube-proxy-config.yaml
done [root@k8s-master01 work]# ll kube-proxy-config-k8s-node0*
-rw-r--r-- 1 root root 500 jun 24 20:27 kube-proxy-config-k8s-node01.yaml.template
-rw-r--r-- 1 root root 500 jun 24 20:27 kube-proxy-config-k8s-node02.yaml.template
-rw-r--r-- 1 root root 500 jun 24 20:27 kube-proxy-config-k8s-node03.yaml.template 5)创建和分发 kube-proxy systemd unit 文件
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# cat > kube-proxy.service <<eof
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=${k8s_dir}/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy-config.yaml \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
eof 分发 kube-proxy systemd unit 文件:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_name in ${node_node_names[@]}
do
echo ">>> ${node_node_name}"
scp kube-proxy.service root@${node_node_name}:/etc/systemd/system/
done 6)启动 kube-proxy 服务
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${node_node_ips[@]}
do
echo ">>> ${node_node_ip}"
ssh root@${node_node_ip} "mkdir -p ${k8s_dir}/kube-proxy"
ssh root@${node_node_ip} "modprobe ip_vs_rr"
ssh root@${node_node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
done 注意:启动服务前必须先创建工作目录; 检查启动结果:
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${node_node_ips[@]}
do
echo ">>> ${node_node_ip}"
ssh root@${node_node_ip} "systemctl status kube-proxy|grep active"
done 预期结果:
>>> 172.16.60.244
active: active (running) since mon 2019-06-24 20:35:31 cst; 2min 0s ago
>>> 172.16.60.245
active: active (running) since mon 2019-06-24 20:35:30 cst; 2min 0s ago
>>> 172.16.60.246
active: active (running) since mon 2019-06-24 20:35:32 cst; 1min 59s ago 确保状态为 active (running),否则查看日志,确认原因(journalctl -u kube-proxy) 7)查看监听端口(在任意一台node节点上查看)
[root@k8s-node01 ~]# netstat -lnpt|grep kube-prox
tcp 0 0 172.16.60.244:10249 0.0.0.0:* listen 3830/kube-proxy
tcp 0 0 172.16.60.244:10256 0.0.0.0:* listen 3830/kube-proxy 需要注意:
10249:该端口用于http prometheus metrics port;
10256:该端口用于http healthz port; 8)查看 ipvs 路由规则
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${node_node_ips[@]}
do
echo ">>> ${node_node_ip}"
ssh root@${node_node_ip} "/usr/sbin/ipvsadm -ln"
done 预期输出:
>>> 172.16.60.244
ip virtual server version 1.2.1 (size=4096)
prot localaddress:port scheduler flags
-> remoteaddress:port forward weight activeconn inactconn
tcp 10.254.0.1:443 rr
-> 172.16.60.241:6443 masq 1 0 0
-> 172.16.60.242:6443 masq 1 0 0
-> 172.16.60.243:6443 masq 1 0 0
>>> 172.16.60.245
ip virtual server version 1.2.1 (size=4096)
prot localaddress:port scheduler flags
-> remoteaddress:port forward weight activeconn inactconn
tcp 10.254.0.1:443 rr
-> 172.16.60.241:6443 masq 1 0 0
-> 172.16.60.242:6443 masq 1 0 0
-> 172.16.60.243:6443 masq 1 0 0
>>> 172.16.60.246
ip virtual server version 1.2.1 (size=4096)
prot localaddress:port scheduler flags
-> remoteaddress:port forward weight activeconn inactconn
tcp 10.254.0.1:443 rr
-> 172.16.60.241:6443 masq 1 0 0
-> 172.16.60.242:6443 masq 1 0 0
-> 172.16.60.243:6443 masq 1 0 0 由上面可以看出:所有通过 https 访问 k8s svc kubernetes 的请求都转发到 kube-apiserver 节点的 6443 端口;
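补充:除了查看 ipvs 规则,也可以按前面 7) 中的端口说明,批量探测各 node 节点 kube-proxy 的 healthz 和 metrics 端口(可选的检查示例,两个端口都是 http,无需证书):
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${node_node_ips[@]}
do
echo ">>> ${node_node_ip}"
# 10256 为 healthz 端口,返回 200 说明 kube-proxy 存活
curl -s -o /dev/null -w "healthz http code: %{http_code}\n" http://${node_node_ip}:10256/healthz
# 10249 为 prometheus metrics 端口,这里只取前几行确认有输出
curl -s http://${node_node_ip}:10249/metrics | head -3
done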
十、验证kubernetes集群功能
使用 daemonset 验证 master 和 worker 节点是否工作正常。 1)检查节点状态
[root@k8s-master01 ~]# kubectl get nodes
name status roles age version
k8s-node01   ready   <none>   6d3h   v1.14.2
k8s-node02   ready   <none>   6d3h   v1.14.2
k8s-node03   ready   <none>   6d3h   v1.14.2 各node节点状态都为 ready 时正常。 2)创建测试文件
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# cat > nginx-ds.yml <<eof
apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
eof 执行测试
[root@k8s-master01 work]# kubectl create -f nginx-ds.yml 3)检查各节点的 pod ip 连通性
稍微等一会儿,或者多刷几次下面的命令,才会显示出pod的ip信息
[root@k8s-master01 work]# kubectl get pods -o wide|grep nginx-ds
nginx-ds-4lf8z 1/1 running 0 46s 172.30.56.2 k8s-node02
nginx-ds-6kfsw 1/1 running 0 46s 172.30.72.2 k8s-node03
nginx-ds-xqdgw 1/1 running 0 46s 172.30.88.2 k8s-node01 可见,nginx-ds的 pod ip分别是 172.30.56.2、172.30.72.2、172.30.88.2,在所有 node 上分别 ping 这三个 ip,看是否连通:
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${node_node_ips[@]}
do
echo ">>> ${node_node_ip}"
ssh ${node_node_ip} "ping -c 1 172.30.56.2"
ssh ${node_node_ip} "ping -c 1 172.30.72.2"
ssh ${node_node_ip} "ping -c 1 172.30.88.2"
done 预期输出结果:
>>> 172.16.60.244
ping 172.30.56.2 (172.30.56.2) 56(84) bytes of data.
64 bytes from 172.30.56.2: icmp_seq=1 ttl=63 time=0.542 ms --- 172.30.56.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms
ping 172.30.72.2 (172.30.72.2) 56(84) bytes of data.
64 bytes from 172.30.72.2: icmp_seq=1 ttl=63 time=0.654 ms --- 172.30.72.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms
ping 172.30.88.2 (172.30.88.2) 56(84) bytes of data.
64 bytes from 172.30.88.2: icmp_seq=1 ttl=64 time=0.103 ms --- 172.30.88.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms
>>> 172.16.60.245
ping 172.30.56.2 (172.30.56.2) 56(84) bytes of data.
64 bytes from 172.30.56.2: icmp_seq=1 ttl=64 time=0.106 ms --- 172.30.56.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms
ping 172.30.72.2 (172.30.72.2) 56(84) bytes of data.
64 bytes from 172.30.72.2: icmp_seq=1 ttl=63 time=0.408 ms --- 172.30.72.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms
ping 172.30.88.2 (172.30.88.2) 56(84) bytes of data.
64 bytes from 172.30.88.2: icmp_seq=1 ttl=63 time=0.345 ms --- 172.30.88.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms
>>> 172.16.60.246
ping 172.30.56.2 (172.30.56.2) 56(84) bytes of data.
64 bytes from 172.30.56.2: icmp_seq=1 ttl=63 time=0.350 ms --- 172.30.56.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms
ping 172.30.72.2 (172.30.72.2) 56(84) bytes of data.
64 bytes from 172.30.72.2: icmp_seq=1 ttl=64 time=0.105 ms --- 172.30.72.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms
ping 172.30.88.2 (172.30.88.2) 56(84) bytes of data.
64 bytes from 172.30.88.2: icmp_seq=1 ttl=63 time=0.584 ms --- 172.30.88.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 4)检查服务 ip 和端口可达性
[root@k8s-master01 work]# kubectl get svc |grep nginx-ds
nginx-ds   nodeport   10.254.41.83   <none>   80:30876/tcp   4m24s 可见:
service cluster ip:10.254.41.83
服务端口:80
nodeport 端口:30876 在所有 node 上 curl service ip:
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${node_node_ips[@]}
do
echo ">>> ${node_node_ip}"
ssh ${node_node_ip} "curl -s 10.254.41.83"
done 预期输出: nginx欢迎页面内容。 5)检查服务的 nodeport 可达性
在所有 node 上执行:
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${node_node_ips[@]}
do
echo ">>> ${node_node_ip}"
ssh ${node_node_ip} "curl -s ${node_node_ip}:30876"
done 预期输出: nginx 欢迎页面内容。
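最后可以再看一眼 service 的 endpoints,确认 10.254.41.83:80 背后确实关联了三个 nginx-ds 的 pod;验证完成后如不再需要,可按需清理测试资源(以下均为可选操作):
[root@k8s-master01 work]# kubectl get endpoints nginx-ds
[root@k8s-master01 work]# kubectl delete -f nginx-ds.yml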