Published: 2023-09-03 17:30
k8s yum-based deployment and testing; some of the images and yml files used below can be sent via private message on request.
10.0.0.11 k8s-master
10.0.0.12 k8s-node-1
10.0.0.13 k8s-node-2
yum install etcd -y
vim /etc/etcd/etcd.conf
Line 6:  ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
Line 21: ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"
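The two edits above can also be applied non-interactively with sed; a minimal sketch (it runs against a throwaway copy here, so on the real host point CONF at /etc/etcd/etcd.conf):

```shell
# Sketch: apply the two etcd.conf edits with sed instead of vim.
# CONF is a throwaway copy for illustration; on the host use /etc/etcd/etcd.conf.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
EOF

# Replace whatever values are present with the ones this guide uses.
sed -i 's#^ETCD_LISTEN_CLIENT_URLS=.*#ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"#' "$CONF"
sed -i 's#^ETCD_ADVERTISE_CLIENT_URLS=.*#ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"#' "$CONF"

grep '^ETCD_' "$CONF"
```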
systemctl start etcd.service
systemctl enable etcd.service
etcdctl set testdir/testkey0 0    # set a test key
etcdctl get testdir/testkey0      # read back the value just set
etcdctl set testdir/testkey4 4
etcdctl get testdir/testkey4
etcdctl -C http://10.0.0.11:2379 cluster-health
Expected output:
member 8e9e05c52164694d is healthy: got healthy result from http://10.0.0.11:2379
cluster is healthy
yum install kubernetes-master.x86_64 -y
vim /etc/kubernetes/apiserver
Line 8:  KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
Line 11: KUBE_API_PORT="--port=8080"
Line 14: KUBELET_PORT="--kubelet-port=10250"
Line 17: KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.11:2379"
Line 23: KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
vim /etc/kubernetes/config
Line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"
systemctl enable kube-apiserver.service
systemctl restart kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl restart kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl restart kube-scheduler.service
Check:
[root@k8s-master ~]# kubectl get componentstatus
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
The API server should also be reachable in a browser at http://10.0.0.11:8080/
yum install kubernetes-node.x86_64 -y    # this pulls in docker automatically (docker comes from the base repo)
vim /etc/kubernetes/config    # this file is also read by kube-proxy
Line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"
vim /etc/kubernetes/kubelet
Line 5:  KUBELET_ADDRESS="--address=0.0.0.0"
Line 8:  KUBELET_PORT="--port=10250"
Line 11: KUBELET_HOSTNAME="--hostname-override=10.0.0.12"    # use this node's own IP: 10.0.0.13 on k8s-node-2
Line 14: KUBELET_API_SERVER="--api-servers=http://10.0.0.11:8080"
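Since --hostname-override must differ per node, the edit can be scripted instead of done by hand in vim. A sketch against a throwaway copy; on the host, point the variable at /etc/kubernetes/kubelet and set NODE_IP for the node being configured:

```shell
# Sketch: set --hostname-override per node without editing the file by hand.
NODE_IP=10.0.0.12        # change to 10.0.0.13 when configuring k8s-node-2
KUBELET_CONF=$(mktemp)   # throwaway copy; on the host use /etc/kubernetes/kubelet
cat > "$KUBELET_CONF" <<'EOF'
KUBELET_HOSTNAME="--hostname-override=127.0.0.1"
EOF

sed -i "s#^KUBELET_HOSTNAME=.*#KUBELET_HOSTNAME=\"--hostname-override=${NODE_IP}\"#" "$KUBELET_CONF"
grep KUBELET_HOSTNAME "$KUBELET_CONF"
```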
systemctl enable kubelet.service
systemctl restart kubelet.service    # docker is started automatically; if docker fails to start, kubelet will fail too
systemctl enable kube-proxy.service
systemctl restart kube-proxy.service
systemctl enable docker
Check on the master node:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS AGE
10.0.0.12 Ready 6m
10.0.0.13 Ready 3s
cAdvisor listens on port 4194; open it in a browser to see each node's resource usage.
yum install flannel -y
sed -i 's#http://127.0.0.1:2379#http://10.0.0.11:2379#g' /etc/sysconfig/flanneld    # on all nodes: point flanneld at the etcd on the master
## On the master node:
etcdctl mk /atomic.io/network/config '{ "Network": "172.18.0.0/16" }'
# flanneld reads this key from etcd at startup, so it must exist before flanneld starts
etcdctl get /atomic.io/network/config
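flanneld will fail to start if the value stored under /atomic.io/network/config is not valid JSON (smart quotes picked up from copy-pasting are a classic cause). A quick local check before writing the key, assuming python3 is available:

```shell
# Validate the flannel network config string before writing it into etcd.
FLANNEL_CONFIG='{ "Network": "172.18.0.0/16" }'
echo "$FLANNEL_CONFIG" | python3 -m json.tool
```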
yum install docker -y
systemctl start docker
ip a    # docker's default bridge is on 172.17.0.1
systemctl enable flanneld.service
systemctl restart flanneld.service    # brings up a flannel interface with a subnet assigned automatically from the pool
ip a    # flannel gets a random /24 out of 172.18.0.0/16, e.g. 172.18.94.0, 172.18.36.0, 172.18.78.0
systemctl restart docker    # after this restart, docker's bridge moves into the flannel subnet
ip a    # docker0 is now e.g. 172.18.94.1, the same /24 as flannel on this host
systemctl enable docker
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service
## On the node(s):
systemctl enable flanneld.service
systemctl restart flanneld.service    # e.g. assigned 172.18.84.0/24
systemctl restart docker              # docker0 follows, e.g. 172.18.84.1/24
systemctl restart kubelet.service
systemctl restart kube-proxy.service
vim /usr/lib/systemd/system/docker.service
# add one line under the [Service] section
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT    # makes the FORWARD policy fix permanent
systemctl daemon-reload
systemctl restart docker
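After the daemon-reload and restart, it is worth confirming the line actually landed in the unit file. A sketch checked against a throwaway copy; on the host, grep /usr/lib/systemd/system/docker.service directly:

```shell
# Check that docker.service carries the permanent FORWARD fix.
UNIT=$(mktemp)   # stand-in for /usr/lib/systemd/system/docker.service
cat > "$UNIT" <<'EOF'
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
ExecStart=/usr/bin/dockerd-current
EOF

if grep -q '^ExecStartPost=.*iptables -P FORWARD ACCEPT' "$UNIT"; then
  echo "FORWARD fix present"
else
  echo "FORWARD fix missing: cross-host container traffic will be dropped"
fi
```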
Test whether the flannel network lets containers on different hosts reach each other (check the kernel's ip_forward setting and the iptables rules first).
iptables -P FORWARD ACCEPT    # temporary: reset on reboot or whenever docker restarts
Testing cross-host container communication:
1. Upload the alpine tarball and load it as an image: docker load -i /docker_alpine.tar.gz
2. docker run -it alpine:latest, run ip a to get the container's IP, then ping it from a container on the other host.
3. If the ping fails, check the kernel forwarding setting and the iptables rules: the default FORWARD policy is ACCEPT, but installing docker flips it to DROP. Apply the permanent fix: vim /usr/lib/systemd/system/docker.service and add ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT under the [Service] section.
# On all nodes
vi /etc/docker/daemon.json
{
"registry-mirrors": ["https://registry.docker-cn.com"],
"insecure-registries": ["10.0.0.11:5000"]
}
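docker refuses to start if daemon.json is not strict JSON (double quotes only, no trailing commas), so validating it before the restart saves a debugging round. A sketch against a local copy, assuming python3 is installed:

```shell
# Validate daemon.json before restarting docker.
DAEMON_JSON=$(mktemp)   # throwaway copy; on the host use /etc/docker/daemon.json
cat > "$DAEMON_JSON" <<'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["10.0.0.11:5000"]
}
EOF

python3 -m json.tool "$DAEMON_JSON" > /dev/null && echo "daemon.json: valid JSON"
```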
systemctl restart docker
# On the master node
docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry registry
(The command above pulls the registry image automatically; on a slow network, upload the tarball yourself and docker load -i it instead.)
docker ps
Test pushing an image to the private registry:
Upload the image tarball and docker load -i it.
docker tag docker.io/alpine:latest 10.0.0.11:5000/alpine:latest    # re-tag into the private-registry format
docker tag alpine:latest 10.0.0.11:5000/alpine:latest              # use the source name exactly as `docker images` shows it
docker push 10.0.0.11:5000/alpine:latest
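The re-tagging step always follows the same pattern (registry address plus image name), so it can be wrapped in a tiny helper; `private_tag` here is a hypothetical name, not a docker command:

```shell
# Hypothetical helper: build the private-registry tag for an image name.
REGISTRY=10.0.0.11:5000
private_tag() {
  # Strip any repository prefix such as docker.io/ and prepend our registry.
  echo "${REGISTRY}/${1##*/}"
}

private_tag docker.io/alpine:latest   # -> 10.0.0.11:5000/alpine:latest
# then: docker tag docker.io/alpine:latest "$(private_tag docker.io/alpine:latest)"
```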
If the push fails with "received unexpected HTTP status: 500 Internal Server Error",
check the SELinux status, disable it temporarily with setenforce 0, and push again.
The pushed image then appears under the directory mounted into the registry container:
/opt/myregistry/docker/registry/v2/repositories
K8s use case: microservices.
A small application is not automatically a microservice.