The Road to Learning Kubernetes (27): Deploying k8s 1.15.2

July 6, 2021

I. Environment Preparation

IP address       Role    CPU    Memory  Hostname    Docker version
192.168.56.110   master  >=2C   >=2G    k8s-master  19.03
192.168.56.120   node    >=2C   >=2G    k8s-node01  19.03
192.168.56.130   node    >=2C   >=2G    k8s-node02  19.03

Perform the following steps on all nodes:

1. Set the hostname of each host; the management node is k8s-master (run the matching command on the corresponding machine):

# hostnamectl set-hostname k8s-master
# hostnamectl set-hostname k8s-node01
# hostnamectl set-hostname k8s-node02

2. Edit the /etc/hosts file and add name-resolution entries

cat <<EOF >> /etc/hosts
192.168.56.110 k8s-master
192.168.56.120 k8s-node01
192.168.56.130 k8s-node02
EOF
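
A quick sanity check that the names now resolve (optional):

# ping -c 1 k8s-node01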

3. Disable the firewall, SELinux, and swap

# systemctl stop firewalld
# systemctl disable firewalld
# setenforce 0
# sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# swapoff -a
# sed -i 's/.*swap.*/#&/' /etc/fstab
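
These changes can be verified before continuing (kubeadm's preflight checks will also flag leftover swap):

# getenforce                      # should print Permissive (Disabled after a reboot)
# free -m | grep -i swap          # the Swap line should show 0 total
# systemctl is-active firewalld   # should print inactive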

4. Configure kernel parameters so that bridged IPv4 traffic is passed to the iptables chains

# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system
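
If sysctl reports that these keys do not exist, the br_netfilter kernel module is not loaded yet (common on a freshly installed CentOS 7 host); load it and re-run:

# modprobe br_netfilter
# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf    # reload on boot
# sysctl --system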

5. Configure domestic YUM repositories (Aliyun mirrors)

# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# yum clean all && yum makecache

6. Configure the Kubernetes and Docker package repositories (Aliyun mirrors)

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# cd /etc/yum.repos.d/ && wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

II. Software Installation

Note: perform the following operations on all nodes.

1. Install Docker

# yum list docker-ce.x86_64 --showduplicates | sort -r    # list the available Docker versions
# yum install docker-ce                                   # install the latest version, or:
# yum install docker-ce-18.09.8-3.el7                     # install a specific version
# systemctl enable docker && systemctl start docker
# docker --version
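
Optionally, configure Docker to pull through a domestic registry mirror to speed up image downloads (a sketch; the mirror endpoint below is only an example, substitute whichever mirror you use):

# mkdir -p /etc/docker
# cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
# systemctl daemon-reload && systemctl restart docker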

2. Install kubeadm, kubelet, and kubectl

# yum install -y kubelet-1.15.2 kubeadm-1.15.2 kubectl-1.15.2    # pin to the target version; an unpinned install pulls a release far newer than the 1.15.2 being deployed
# systemctl enable kubelet

Modify the cgroup driver: append "--cgroup-driver=cgroupfs" to the end of KUBELET_KUBECONFIG_ARGS
# vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=cgroupfs"

III. Deploying the Master Node

1. Initialize the Kubernetes cluster on the master node

The pod network segment is defined as 10.244.0.0/16, and the api-server address is this machine's IP. Because the default image registry cannot be reached from China, --image-repository is used to point at the Aliyun mirror registry.


[root@k8s-master ~]# kubeadm init --kubernetes-version=1.15.2 --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.15.2
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.110]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.110 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.110 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.014258 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: klo2o3.77512ufwsjxzp9ws
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.110:6443 --token klo2o3.77512ufwsjxzp9ws \
    --discovery-token-ca-cert-hash sha256:d8561c1deed76a67e6c665b3bbd9c59d076d6bcd93bc79291890aa49a5c7386e 

Record the join command printed above; it is what the other nodes use to join the Kubernetes cluster! A way to regenerate it is shown below.
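
If the command is misplaced, or the token expires (bootstrap tokens are valid for 24 hours by default), a fresh join command can be generated on the master at any time:

# kubeadm token create --print-join-command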

[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

2. Configure the kubectl tool

[root@k8s-master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
# Without this environment variable, kubectl has no credentials to manage the cluster, and any query is refused with: The connection to the server localhost:8080 was refused - did you specify the right host or port?
# Alternatively, use the approach suggested in the init output:

[root@k8s-master ~]# mkdir -p /root/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf /root/.kube/config 
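
Note that the export only lasts for the current shell session, while the $HOME/.kube/config copy above is persistent. To keep using the environment-variable approach across logins, append it to the shell profile:

[root@k8s-master ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /root/.bash_profile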

[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   2m10s   v1.15.2

3. Deploy the flannel network

The flannel image is hosted on quay.io, which cannot be reached from China, and the Aliyun registry requires a login, so download the image from a third-party mirror site and retag it:

# mkdir k8s && cd k8s
# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
# docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
# kubectl apply -f kube-flannel.yml

# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-bccdc95cf-ghfrp              1/1     Running   0          129m
coredns-bccdc95cf-h4tch              1/1     Running   0          129m
etcd-k8s-master                      1/1     Running   0          128m
kube-apiserver-k8s-master            1/1     Running   0          128m
kube-controller-manager-k8s-master   1/1     Running   0          128m
kube-flannel-ds-amd64-r2hmf          1/1     Running   0          111m
kube-flannel-ds-amd64-zwt6l          1/1     Running   0          36m
kube-proxy-czjzf                     1/1     Running   0          129m
kube-proxy-ts4nf                     1/1     Running   0          36m
kube-scheduler-k8s-master            1/1     Running   0          128m

Once all of the pods above are in the Running state, the cluster is running normally. Note that the master node is tainted during cluster initialization, so pods are not scheduled onto it; the relevant attribute is: Taints: node-role.kubernetes.io/master:NoSchedule. The taint can be inspected as shown below.
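
The taint can be confirmed with kubectl and, for a single-machine test cluster, removed (optional; not needed for this walkthrough):

[root@k8s-master ~]# kubectl describe node k8s-master | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
[root@k8s-master ~]# kubectl taint nodes k8s-master node-role.kubernetes.io/master-    # optional: allow pods on the master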

IV. Deploying the Worker Nodes

Perform the following on all worker nodes.

Note that the worker nodes also run the flannel, pause, and kube-proxy pods, so the corresponding images need to be pulled in advance. The required images are: k8s.gcr.io/kube-proxy-amd64:v1.15.2, quay.io/coreos/flannel:v0.11.0-amd64, and k8s.gcr.io/pause:3.1. A pull-and-retag sketch follows.
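
A sketch of pre-pulling through reachable mirrors (assuming the Aliyun and Qiniu mirrors carry these tags; since the cluster was initialized with the Aliyun --image-repository, kube-proxy and pause are normally fetched from that registry automatically, so flannel is the one that usually needs the manual retag):

# docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.15.2
# docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.15.2 k8s.gcr.io/kube-proxy-amd64:v1.15.2
# docker pull registry.aliyuncs.com/google_containers/pause:3.1
# docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
# docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
# docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64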

# kubeadm join 192.168.56.110:6443 --token klo2o3.77512ufwsjxzp9ws \
    --discovery-token-ca-cert-hash sha256:d8561c1deed76a67e6c665b3bbd9c59d076d6bcd93bc79291890aa49a5c7386e

V. Checking Cluster Status

Perform the following on the master.

1. Check the cluster status on the master; output like the following means the cluster is healthy. The key field is STATUS: the cluster is normal when every node reports Ready.

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   17h   v1.15.2
k8s-node01   Ready    <none>   16h   v1.15.2
k8s-node02   Ready    <none>   11s   v1.15.2

2. Create a pod to validate the cluster

[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@k8s-master ~]# kubectl get pod,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-554b9c67f9-lw4jw   1/1     Running   0          2m54s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        139m
service/nginx        NodePort    10.110.217.32   <none>        80:30282/TCP   2m42s
[root@k8s-master ~]# curl http://192.168.56.110:30282/
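
A healthy cluster returns the default "Welcome to nginx!" page. Because the service is of type NodePort, the same port also answers on every worker node's IP:

[root@k8s-master ~]# curl http://192.168.56.120:30282/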
