Table of Contents
I. Environment Preparation
1. Server configuration
2. Environment configuration (run on all three nodes)
1) Disable the firewall
2) Disable SELinux
3) Set the hostnames
4) Configure name resolution
5) Disable swap
II. Install docker-ce (run on all three nodes)
1. Install the dependencies required by Docker
2. Configure the yum repository
3. Install Docker
4. Start Docker and enable it at boot
III. Install Kubernetes and deploy the cluster
1. Configure the Kubernetes yum repository (run on all three nodes)
2. Install kubectl, kubelet, kubeadm, and ipvsadm (run on all three nodes)
3. Load the IPVS-related kernel modules (run on all three nodes)
4. Configure forwarding parameters (run on all three nodes)
5. Enable kubelet at boot (run on all three nodes)
IV. Initialize the master node
1. Generate the initialization file init-config.yaml
2. Edit the initialization configuration file
3. Pull the images required for initialization
4. Run the initialization
1) Apply the configuration file
2) Post-initialization steps
5. Join the nodes to the cluster (run on the node machines)
6. View the cluster
V. Configure the flannel network plugin
1. The kube-flannel.yaml configuration file
2. Pull the images
3. Apply the configuration file
4. Check the cluster status
I. Environment Preparation
1. Server configuration
Hostname / IP | Installed components
---|---
k8s-master: 192.168.22.134 | docker, kubectl, kubelet, kubeadm, ipvsadm, kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kube-proxy, flannel
k8s-node1: 192.168.22.135 | docker, kubectl, kubelet, kubeadm, ipvsadm, kube-proxy, flannel
k8s-node2: 192.168.22.136 | docker, kubectl, kubelet, kubeadm, ipvsadm, kube-proxy, flannel
2. Environment configuration (run on all three nodes)
1) Disable the firewall
[root@k8s-master ~]# systemctl disable firewalld && systemctl stop firewalld
2) Disable SELinux
# Set SELinux to permissive mode for the current session
[root@k8s-master ~]# setenforce 0
[root@k8s-master ~]# vim /etc/sysconfig/selinux
# Change SELINUX=enforcing to SELINUX=disabled; this setting only takes effect after a reboot
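If you prefer not to edit the file interactively, the same change can be scripted; this is just a sketch equivalent to the manual edit above, assuming the file still contains the default SELINUX=enforcing line:
[root@k8s-master ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/sysconfig/selinux
[root@k8s-master ~]# grep ^SELINUX= /etc/sysconfig/selinux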
3) Set the hostnames
# k8s-master
[root@k8s-master ~]# hostnamectl set-hostname k8s-master
# k8s-node1
[root@k8s-node1 ~]# hostnamectl set-hostname k8s-node1
# k8s-node2
[root@k8s-node2 ~]# hostnamectl set-hostname k8s-node2
4) Configure name resolution (/etc/hosts)
[root@k8s-master ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.22.134 k8s-master
192.168.22.135 k8s-node1
192.168.22.136 k8s-node2
5) Disable swap
[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# sed -i 's/.*swap.*/#&/' /etc/fstab
[root@k8s-master ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3931        1012         910          12        2008        2646
Swap:             0           0           0
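As an optional sanity check, confirm the swap entry in /etc/fstab was actually commented out so it will not come back after a reboot:
[root@k8s-master ~]# grep swap /etc/fstab
# every swap line printed here should now begin with '#'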
II. Install docker-ce (run on all three nodes)
1. Install the dependencies required by Docker
[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 git
2. Configure the yum repository
[root@k8s-master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master ~]# yum makecache fast
[root@k8s-master ~]# yum repolist
3. Install Docker
[root@k8s-master ~]# yum -y install docker-ce
4. Start Docker and enable it at boot
[root@k8s-master ~]# systemctl enable docker && systemctl start docker
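Optional: kubeadm will later warn that Docker is using the "cgroupfs" cgroup driver (see the init output in section IV). A minimal sketch for switching Docker to the recommended "systemd" driver before installing Kubernetes, assuming /etc/docker/daemon.json does not exist yet:
[root@k8s-master ~]# cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@k8s-master ~]# systemctl restart docker
[root@k8s-master ~]# docker info | grep -i cgroup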
III. Install Kubernetes and deploy the cluster
1. Configure the Kubernetes yum repository (run on all three nodes)
[root@k8s-master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2. Install kubectl, kubelet, kubeadm, and ipvsadm (run on all three nodes)
[root@k8s-master ~]# yum install -y kubelet-1.19.1 kubeadm-1.19.1 kubectl-1.19.1 ipvsadm
3. Load the IPVS-related kernel modules (run on all three nodes)
[root@k8s-master ~]# modprobe ip_vs && modprobe ip_vs_rr && modprobe ip_vs_wrr && modprobe ip_vs_sh && modprobe nf_conntrack_ipv4 && modprobe br_netfilter
# To have these modules reloaded after a reboot, append the modprobe command to a startup file, e.g. /etc/rc.local
[root@k8s-node2 ~]# vim /etc/rc.local
[root@k8s-master ~]# cat /etc/rc.local
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local
modprobe ip_vs && modprobe ip_vs_rr && modprobe ip_vs_wrr && modprobe ip_vs_sh && modprobe nf_conntrack_ipv4 && modprobe br_netfilter
[root@k8s-node2 ~]# chmod +x /etc/rc.local
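As the comments inside rc.local themselves suggest, a systemd-native alternative is to list the modules in /etc/modules-load.d/, which systemd-modules-load reads at boot. A minimal sketch (the file name k8s.conf is arbitrary):
[root@k8s-master ~]# cat > /etc/modules-load.d/k8s.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
br_netfilter
EOF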
4. Configure forwarding parameters (run on all three nodes)
[root@k8s-master ~]# cat >> /etc/sysctl.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> vm.swappiness = 0
> EOF
# Apply the new settings
[root@k8s-master ~]# sysctl --system
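An optional check that both the modules and the sysctl settings are in place before moving on:
[root@k8s-master ~]# lsmod | grep -e ip_vs -e nf_conntrack -e br_netfilter
[root@k8s-master ~]# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# both values should print 1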
5. Enable kubelet at boot (run on all three nodes)
[root@k8s-master ~]# systemctl enable kubelet
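Note that kubelet cannot fully start yet: until kubeadm init (on the master) or kubeadm join (on the nodes) writes its configuration in the next sections, it sits in a restart loop, which is expected and can be ignored:
[root@k8s-master ~]# systemctl status kubelet
# "activating (auto-restart)" is normal at this stage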
IV. Initialize the master node
1. Generate the initialization file init-config.yaml
[root@k8s-master ~]# kubeadm config print init-defaults > init-config.yaml
2. Edit the initialization configuration file
[root@k8s-master ~]# vim init-config.yaml
[root@k8s-master ~]# cat init-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.22.134    # the master's IP address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master                    # the master's hostname (or IP address)
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers    # changed to a domestic (Aliyun) mirror
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16            # the Pod network CIDR
scheduler: {}
3. Pull the images required for initialization
[root@k8s-master ~]# kubeadm config images pull --config=init-config.yaml
W0804 23:13:15.366467 47625 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.19.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.2
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.7.0
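An optional check that all seven images listed above are now present locally:
[root@k8s-master ~]# docker images | grep registry.aliyuncs.com/google_containers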
4. Run the initialization
1) Apply the configuration file
[root@k8s-master ~]# kubeadm init --config=init-config.yaml
W0804 23:15:17.173025 49158 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.0
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.22.134]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.22.134 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.22.134 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.518466 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.22.134:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:27f933b3f95c0b846e42222cf902c44d81918e1891109217e9d0e70bae14b5ba
2) Post-initialization steps
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
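The bootstrap token in the join command expires after 24 hours (the ttl set in init-config.yaml). If a node needs to be added later, a fresh join command can be printed on the master with:
[root@k8s-master ~]# kubeadm token create --print-join-command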
5. Join the nodes to the cluster (run on the node machines)
# node1
[root@k8s-node1 ~]# kubeadm join 192.168.22.134:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:27f933b3f95c0b846e42222cf902c44d81918e1891109217e9d0e70bae14b5ba
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# node2
[root@k8s-node2 ~]# kubeadm join 192.168.22.134:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:27f933b3f95c0b846e42222cf902c44d81918e1891109217e9d0e70bae14b5ba
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
6. View the cluster
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 2m18s v1.19.1
k8s-node1 NotReady <none> 57s v1.19.1
k8s-node2 NotReady <none> 53s v1.19.1
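All three nodes report NotReady at this point because no CNI network plugin has been deployed yet; the coredns pods will likewise stay Pending until flannel is installed in the next section, which you can confirm with:
[root@k8s-master ~]# kubectl get pod -n kube-system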
V. Configure the flannel network plugin
1. The kube-flannel.yaml configuration file
[root@k8s-master ~]# cat k8s-flannel/kube-flannel.yaml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-system
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      - key: node.kubernetes.io/not-ready
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
        #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.21.5
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.21.5
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens33
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
[root@k8s-master ~]#
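Two points worth checking before applying this file: the manifest installs flannel into kube-system (the upstream manifest uses a dedicated kube-flannel namespace), and the --iface=ens33 argument must match the name of the network interface that carries the 192.168.22.0/24 addresses on your nodes; verify it with:
[root@k8s-master ~]# ip -4 addr show | grep 192.168.22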
2. Pull the images (run on all three nodes)
# In this setup the flannel image tarballs were uploaded to the servers and loaded into Docker instead of being pulled from the registry
[root@k8s-master k8s-flannel]# docker load -i flannel.tar
f1417ff83b31: Loading layer 7.338MB/7.338MB
0b4e551d392c: Loading layer 7.374MB/7.374MB
784170efc633: Loading layer 12.91MB/12.91MB
519c630428ca: Loading layer 2.809MB/2.809MB
e71295e69b53: Loading layer 39.44MB/39.44MB
5594351037b7: Loading layer 5.632kB/5.632kB
3d7bd79bcdea: Loading layer 9.728kB/9.728kB
fa8e863f60fa: Loading layer 8.704kB/8.704kB
Loaded image: flannel/flannel:v0.21.5
[root@k8s-master k8s-flannel]# docker load -i flannel-cni-plugin-v1.1.2.tar
7df5bd7bd262: Loading layer 5.904MB/5.904MB
64536537b1ac: Loading layer 2.344MB/2.344MB
Loaded image: flannel/flannel-cni-plugin:v1.1.2
[root@k8s-master k8s-flannel]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
flannel/flannel v0.21.5 a6c0cb5dbd21 15 months ago 68.9MB
flannel/flannel-cni-plugin v1.1.2 7a2dcab94698 20 months ago 7.97MB
registry.aliyuncs.com/google_containers/etcd 3.4.13-0 0369cf4303ff 3 years ago 253MB
registry.aliyuncs.com/google_containers/kube-proxy v1.19.0 bc9c328f379c 3 years ago 118MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.19.0 09d665d529d0 3 years ago 111MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.19.0 1b74e93ece2f 3 years ago 119MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.19.0 cbdc8369d8b1 3 years ago 45.7MB
registry.aliyuncs.com/google_containers/coredns 1.7.0 bfe3a36ebd25 4 years ago 45.2MB
registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 4 years ago 683kB
3. Apply the configuration file
[root@k8s-master k8s-flannel]# kubectl apply -f kube-flannel.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
namespace/kube-system configured
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
4. Check the cluster status
[root@k8s-master k8s-flannel]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 16m v1.19.1
k8s-node1 Ready <none> 15m v1.19.1
k8s-node2 Ready <none> 15m v1.19.1
[root@k8s-master k8s-flannel]# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-6d56c8448f-9xz5p 1/1 Running 0 16m 10.244.2.2 k8s-node2 <none> <none>
coredns-6d56c8448f-sng7j 1/1 Running 0 16m 10.244.1.2 k8s-node1 <none> <none>
etcd-k8s-master 1/1 Running 0 17m 192.168.22.134 k8s-master <none> <none>
kube-apiserver-k8s-master 1/1 Running 0 17m 192.168.22.134 k8s-master <none> <none>
kube-controller-manager-k8s-master 1/1 Running 0 17m 192.168.22.134 k8s-master <none> <none>
kube-flannel-ds-5pq29 1/1 Running 0 61s 192.168.22.136 k8s-node2 <none> <none>
kube-flannel-ds-n8hwl 1/1 Running 0 61s 192.168.22.134 k8s-master <none> <none>
kube-flannel-ds-rjcqm 1/1 Running 0 61s 192.168.22.135 k8s-node1 <none> <none>
kube-proxy-c6xgv 1/1 Running 0 15m 192.168.22.135 k8s-node1 <none> <none>
kube-proxy-lnjt9 1/1 Running 0 16m 192.168.22.134 k8s-master <none> <none>
kube-proxy-vvpl8 1/1 Running 0 15m 192.168.22.136 k8s-node2 <none> <none>
kube-scheduler-k8s-master 1/1 Running 0 17m 192.168.22.134 k8s-master <none> <none>
[root@k8s-master k8s-flannel]#
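As an optional final smoke test, you can deploy a throwaway workload and confirm it gets scheduled onto a worker node and receives a Pod IP from the 10.244.0.0/16 range; nginx is used here only as an example image:
[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
[root@k8s-master ~]# kubectl get pod,svc -o wide
# clean up the test resources afterwards
[root@k8s-master ~]# kubectl delete svc,deployment nginx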