Published: 2024-10-13 09:01
[Kubernetes] PV, PVC, StorageClass in Practice (Part I)
[Kubernetes] PV, PVC, StorageClass in Practice (Part II)
1 A PV (or a StorageClass) corresponds to exactly one kind of backend storage.
2 In the manual (static) case, we usually create many PVs in advance, so that PVCs can bind to them whenever needed (a minimal static PV sketch follows this list).
3 In the automatic (dynamic) case, a StorageClass manages PV creation.
4 For a Pod to use shared storage, you normally create a PVC describing the desired backend storage type, capacity, and so on; Kubernetes then matches it against the PVs. If nothing matches, the Pod stays Pending. Inside the Pod you consume it like any other volume, simply by name.
5 A Pod can use several PVCs, and one PVC can serve several Pods.
6 A PVC binds to exactly one PV, and a PV corresponds to exactly one kind of backend storage.
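To make the static case in points 2 and 4 concrete, a minimal hand-written NFS PV might look like the following (a sketch only: the name, size and subdirectory are illustrative, and the server address is the NFS server used later in this article):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 172.16.10.5         # NFS server, as used later in this article
    path: /opt/vlumes/pv-demo   # illustrative export subdirectory
A PVC requesting ReadWriteMany and at most 1Gi could then bind to this PV, and a Pod would reference the PVC by name under volumes.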
Kubernetes provides a mechanism for creating PVs automatically: Dynamic Provisioning. The core of this mechanism is the StorageClass API object. A StorageClass object defines two things:
1) The attributes of the PV, such as the storage type and volume size.
2) The storage plugin needed to create this kind of PV.
With these two pieces of information, Kubernetes can take a user-submitted PVC, find the matching StorageClass, invoke the storage plugin that class declares, and create the required PV. In practice this is simple to use: write a YAML file describing what you need and apply it with kubectl create.
In a large Kubernetes cluster there may be thousands of PVCs, which would mean operators have to create that many PVs in advance. As projects evolve, new PVCs keep being submitted, so operators would have to keep adding new PVs that satisfy them, or new Pods would fail to start because their PVCs cannot bind to a PV. Moreover, the storage space obtained through a PVC may well not, by itself, satisfy all of an application's requirements of the storage device.
Different applications also differ in their storage performance requirements, such as read/write speed and concurrency. To solve this, Kubernetes introduced a new resource object: StorageClass. Through a StorageClass definition, an administrator can classify storage resources into types, for example fast storage and slow storage. From a StorageClass's description users can tell directly what the concrete characteristics of each kind of storage are, and request storage appropriate to their application's needs.
To use a StorageClass we must install the corresponding automatic provisioning program. Since the storage backend here is NFS, we need an nfs-client auto-provisioner, also called the Provisioner. It uses the NFS server we have already configured to create persistent volumes automatically, i.e. it creates the PVs for us.
1. Automatically created PVs are placed in the NFS server's shared data directory under names of the form ${namespace}-${pvcName}-${pvName}.
2. When such a PV is reclaimed, it is kept on the NFS server under a name of the form archived-${namespace}-${pvcName}-${pvName}.
How it works and the deployment flow
In the earlier examples we created PVs ahead of time, requested them through PVCs, and used them in Pods. This approach is called static provisioning (Static Provision).
Its counterpart is dynamic provisioning (Dynamic Provision): if no existing PV satisfies a PVC, a PV is created on demand. Compared with static provisioning, dynamic provisioning has a clear advantage: no PVs need to be created in advance, which reduces the administrator's workload and is more efficient.
Dynamic provisioning is implemented through StorageClass, which defines how PVs are created.
As an abstraction over storage resources, a StorageClass hides the backend storage details from the PVCs users submit. This both frees users from caring about storage internals and relieves administrators of manual PV management: the system creates and binds PVs automatically, achieving dynamic provisioning.
A StorageClass definition consists mainly of a name, the backend storage provider (provisioner), the backend-specific configuration (parameters), and the volume reclaim policy (reclaimPolicy). The name of a StorageClass object matters: it is how users request a particular class. The core fields of a StorageClass (provisioner, parameters, reclaimPolicy) cannot be changed once the object is created; to change them you must delete and re-create it (a few fields, such as annotations and allowVolumeExpansion, can be updated in place, as shown later in this article).
An administrator can designate a default StorageClass, which applies only to PVCs that do not request any particular class to bind to; for details, see the PersistentVolumeClaim section of the Kubernetes documentation.
2 A reference StorageClass YAML, with analysis:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: Immediate
2.1 Field reference:
1) provisioner: each StorageClass has a provisioner that determines which volume plugin is used to provision PVs. This field is required.
Volume Plugin | Internal Provisioner | Config Example
---|---|---
AWSElasticBlockStore | ✓ | AWS EBS
AzureFile | ✓ | Azure File
AzureDisk | ✓ | Azure Disk
CephFS | - | -
Cinder | ✓ | OpenStack Cinder
FC | - | -
FlexVolume | - | -
Flocker | ✓ | -
GCEPersistentDisk | ✓ | GCE PD
Glusterfs | ✓ | Glusterfs
iSCSI | - | -
Quobyte | ✓ | Quobyte
NFS | - | NFS
RBD | ✓ | Ceph RBD
VsphereVolume | ✓ | vSphere
PortworxVolume | ✓ | Portworx Volume
ScaleIO | ✓ | ScaleIO
StorageOS | ✓ | StorageOS
Local | - | Local
You are not restricted to the "internal" provisioners listed here (whose names are prefixed with "kubernetes.io" and ship with Kubernetes). You can also run and specify external provisioners: independent programs that follow the specification defined by Kubernetes. Authors of external provisioners have full discretion over where their code lives, how the provisioner is shipped, how it runs, which volume plugin it uses (including Flex), and so on. The repository kubernetes-sigs/sig-storage-lib-external-provisioner contains a library that implements the bulk of the specification for writing external provisioners, and several external provisioners are listed under that repository.
For example, NFS has no internal provisioner, but an external provisioner can be used. In some cases, third-party storage vendors provide their own external provisioner.
2) reclaimPolicy: the reclaim policy. PersistentVolumes dynamically created by a StorageClass get the reclaim policy specified in the class's reclaimPolicy field, which can be either Delete or Retain. If no reclaimPolicy is specified when the StorageClass object is created, it defaults to Delete.
PersistentVolumes that were created manually and are managed through a StorageClass keep whatever reclaim policy they were assigned at creation.
3) allowVolumeExpansion: allow volume expansion.
Feature state: Kubernetes v1.11 [beta]
PersistentVolumes can be configured to be expandable. When this field is set to true, users can resize a volume by editing the corresponding PVC object.
The following volume types support volume expansion when the underlying StorageClass sets allowVolumeExpansion to true:
Volume type | Required Kubernetes version
---|---
gcePersistentDisk | 1.11
awsElasticBlockStore | 1.11
Cinder | 1.11
glusterfs | 1.11
rbd | 1.11
Azure File | 1.11
Azure Disk | 1.11
Portworx | 1.11
FlexVolume | 1.13
CSI | 1.14 (alpha), 1.16 (beta)
Note: the volume expansion feature can only grow a volume, never shrink it.
4) mountOptions: mount options. PersistentVolumes dynamically created by a StorageClass get the mount options specified in the class's mountOptions field.
If the volume plugin does not support mount options but options are specified, provisioning fails. Mount options are validated on neither the class nor the PV; if a mount option is invalid, the PV mount simply fails.
5) volumeBindingMode: the volume binding mode.
The volumeBindingMode field controls when volume binding and dynamic provisioning take place. When unset, Immediate is the default.
Immediate mode means volume binding and dynamic provisioning happen as soon as the PersistentVolumeClaim is created. For storage backends that are topology-constrained and not globally reachable from all nodes in the cluster, the PersistentVolume is then bound or provisioned without knowledge of the Pod's scheduling requirements, which can produce unschedulable Pods.
A cluster administrator can address this by specifying WaitForFirstConsumer mode, which delays binding and provisioning of a PersistentVolume until a Pod using the PersistentVolumeClaim is created. The PersistentVolume is then selected or provisioned according to the topology dictated by the Pod's scheduling constraints, including but not limited to resource requirements, node selectors, pod affinity and anti-affinity, and taints and tolerations.
The following plugins support dynamic provisioning with WaitForFirstConsumer: AWSElasticBlockStore, GCEPersistentDisk, AzureDisk.
The following plugins support WaitForFirstConsumer with pre-created PersistentVolume binding: all of the above, plus Local.
Feature state: Kubernetes v1.17 [stable]
CSI volumes also support dynamic provisioning and pre-created PVs, but you need to look at the documentation of the specific CSI driver for its supported topology keys and examples.
Note: if you choose WaitForFirstConsumer, do not use nodeName in the Pod spec to pin the Pod to a node. If nodeName is used in this case, the scheduler is bypassed and the PVC stays in the Pending state. Instead, use a node selector keyed on the hostname, as in the Pod below.
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  nodeSelector:
    kubernetes.io/hostname: kube-01
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
When a cluster operator specifies the WaitForFirstConsumer volume binding mode, restricting provisioning to particular topologies is no longer necessary in most situations. If it is still needed, however, allowedTopologies can be specified.
This example shows how to restrict the topology of provisioned volumes to specific zones; it should be used as a replacement for the zone and zones parameters of the supported plugins.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values:
          - us-central1-a
          - us-central1-b
6) parameters: the parameters.
Storage classes have parameters that describe the volumes belonging to the class. Different parameters may be accepted depending on the provisioner. For example, the value io1 for the type parameter, and the iopsPerGB parameter, are specific to EBS. When a parameter is omitted, some default is used.
A StorageClass can define at most 512 parameters, and the total length of the parameters object, including its keys and values, cannot exceed 256 KiB.
Part II: Hands-on examples
Setting up StorageClass + NFS takes roughly these steps (a server-side export sketch follows the NFS notes below):
1) Create a working NFS Server.
2) Create a ServiceAccount, which governs the permissions the NFS provisioner runs with in the k8s cluster.
3) Create the StorageClass, which responds to PVCs by invoking the NFS provisioner to do the predefined work and gets the PVs associated with the PVCs.
4) Create the NFS provisioner. It does two things: it creates mount points (volumes) under the NFS shared directory, and it creates PVs and associates them with those NFS mount points.
1) The NFS service was already set up in [Kubernetes] PV, PVC, StorageClass in Practice (Part II).
Note the export: /data/vlumes 172.16.10.5/24(rw,sync,no_subtree_check,no_root_squash)
no_root_squash makes file operations run as root, so files in the data directory can have their permissions and ownership changed by the mounting client. I ran into permission problems here: Kubernetes kept reporting "already present on machine", and only the pod logs revealed that a chown executed inside the container was failing. I suggest opening up the data directory's permissions as far as possible to avoid permission-induced errors inside Kubernetes; in my case a chown of the group inside the container triggered a cascade of errors whose root cause was missing permissions on the NFS side.
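For completeness, a minimal server-side setup matching such an export might look like this (a sketch assuming a CentOS-style host; adjust the directory and network to your environment, and note that the manifests below use /opt/vlumes):
yum install -y nfs-utils
mkdir -p /opt/vlumes
chmod 777 /opt/vlumes    # wide-open permissions, per the advice above
echo '/opt/vlumes 172.16.10.0/24(rw,sync,no_subtree_check,no_root_squash)' >> /etc/exports
systemctl enable --now nfs-server
exportfs -rav            # re-export and list the active exports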
2) rbac.yaml: cluster role, namespaced role, and service account
[root@localhost nfs]# cat rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with the namespace where the provisioner is deployed; same below
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with the namespace where the provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with the namespace where the provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with the namespace where the provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with the namespace where the provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
3) The StorageClass
[root@localhost nfs]# cat nfs-StorageClass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, but it must match the deployment's PROVISIONER_NAME env var
parameters:
  archiveOnDelete: "false"
4) The provisioner (an external provisioner)
[root@localhost nfs]# cat nfs-provisioner-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # must match the namespace used in the RBAC file
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              # must match the provisioner in nfs-StorageClass.yaml
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.16.10.5 # <===== NFS server address
            - name: NFS_PATH
              value: /opt/vlumes # <===== NFS server export path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.10.5 # <===== NFS server address
            path: /opt/vlumes # <===== NFS server export path
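With the three manifests written, applying and verifying them is straightforward (file names as above):
kubectl apply -f rbac.yaml
kubectl apply -f nfs-StorageClass.yaml
kubectl apply -f nfs-provisioner-deployment.yaml
kubectl get pods -l app=nfs-client-provisioner   # wait for STATUS Running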
Check:
[root@localhost ~]# kubectl get pv,pvc,pods,sa,sc -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nfs-client-provisioner-57bd9598b5-dmb26 1/1 Running 0 10m 10.42.1.107 172.16.10.21
NAME SECRETS AGE
serviceaccount/default 1 5d23h
serviceaccount/nfs-client-provisioner 1 10m
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/managed-nfs-storage k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 8m46s
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  server: 172.16.10.5
  path: /opt/vlumes
  readOnly: "false"
server: the hostname or IP address of the NFS server.
path: the path exported by the NFS server.
readOnly: a flag indicating whether the storage is mounted read-only (defaults to false).
Kubernetes does not ship an internal NFS provisioner, so an external provisioner is needed to create a StorageClass for NFS; the nfs-subdir-external-provisioner deployed above is one example.
Before creating anything, the export directory is empty:
[root@localhost pv]# ls /opt/vlumes/
pvc:
[root@localhost nfs]# cat test-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage # must match metadata.name in nfs-StorageClass.yaml
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
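Applying the claim and watching it bind (the -w flag streams status updates):
kubectl apply -f test-pvc.yaml
kubectl get pvc test-claim -w   # STATUS should change to Bound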
Check: the PVC exists, and a PV was automatically created and Bound to it:
[root@localhost pv]# kubectl get pv,pvc -o wide
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
persistentvolume/pvc-b88214f7-f646-41e4-bfbb-7805edf2436c 1Mi RWX Delete Bound default/test-claim managed-nfs-storage 79s Filesystem
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/test-claim Bound pvc-b88214f7-f646-41e4-bfbb-7805edf2436c 1Mi RWX managed-nfs-storage 79s Filesystem
Looking at the naming format again: automatically created PVs live in the NFS server's shared directory under ${namespace}-${pvcName}-${pvName}, which here breaks down as:
namespace: default
pvcName: test-claim
pvName: pvc-36a91a70-22c2-448d-b73b-c233a0baa51a
[root@localhost nfs]# ls /opt/vlumes/default-test-claim-pvc-36a91a70-22c2-448d-b73b-c233a0baa51a/
The PV name is a random string, so as long as you do not delete the PVC, the binding to storage in Kubernetes is never lost. Deleting the PVC amounts to deleting the bound directory: even if you later re-create a PVC with the same name, the generated directory name will not match, because the PV name is a freshly generated random string and the directory name derives from it. So delete PVCs with care.
pod:
[root@localhost nfs]# cat test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: docker.io/library/busybox:1.28.4
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
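Applying the pod and checking the result (the pod touches a file and exits, so it should end in Completed; the ls runs on the NFS server):
kubectl apply -f test-pod.yaml
kubectl get pod test-pod                     # STATUS: Completed
ls /opt/vlumes/default-test-claim-pvc-*/     # should list SUCCESS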
Check:
[root@localhost pv]# k3s kubectl describe pod/test-pod
Warning FailedMount 27s (x3 over 29s) kubelet MountVolume.SetUp failed for volume "kube-api-access-bkpvs" : object "default"/"kube-root-ca.crt" not registered
Get the ServiceAccount, and through it the secrets:
[root@localhost roles]# kubectl get sa
NAME SECRETS AGE
default 1 6d3h
nfs-client-provisioner 1 5m8s
[root@localhost roles]# kubectl get sa nfs-client-provisioner -o json
{
"apiVersion": "v1",
"kind": "ServiceAccount",
"metadata": {
"creationTimestamp": "2022-01-25T06:58:02Z",
"name": "nfs-client-provisioner",
"namespace": "default",
"resourceVersion": "505427",
"uid": "20d00b93-772f-49d3-a9b4-f3656f76f535"
},
"secrets": [
{
"name": "nfs-client-provisioner-token-d2lq7"
}
]
}
Get the CA certificate from the secret:
[root@localhost roles]# kubectl get secrets nfs-client-provisioner-token-d2lq7
NAME TYPE DATA AGE
nfs-client-provisioner-token-d2lq7 kubernetes.io/service-account-token 3 5m31s
[root@localhost roles]# kubectl get secrets nfs-client-provisioner-token-d2lq7 -o json
{
"apiVersion": "v1",
"data": {
"ca.crt": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUyTkRJMU5qTXpOemt3SGhjTk1qSXdNVEU1TURNek5qRTVXaGNOTXpJd01URTNNRE16TmpFNQpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUyTkRJMU5qTXpOemt3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFSVnVXc3ZmY0Rxckh2RFV2U0ZoMEdocUFrMkFWaGJLaTRBdkoyVTRhZXQKNWsxeG5zQTBSbHN3Z3JPcTl3Q0podW0wRnVrQ2l5RUVNQVhsaU5ROVJkUGhvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVXFkcEM4TXlkK0l4MDlueGRIODZJCldTT3FkN013Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUlnR29pdWxCT1FwWXpRQnkxdUtxRmFFdXgrckk3MEltQncKVW92bWc4cnBJZ0FDSVFEMHBuMCsvV01NUnc5M1NGMDh6QVZUL3NUcC9vOVBxOWNWZDlhSlA2aXoyQT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K",
"namespace": "ZGVmYXVsdA==",
"token": "ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNkltSmZablpSVUU1MVVsQnBWWGhVTnpCUVFURkRhMjVxWlhWUWRVTnNaa05GYlMxU09GWmxNRkl4UmtFaWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUprWldaaGRXeDBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5elpXTnlaWFF1Ym1GdFpTSTZJbTVtY3kxamJHbGxiblF0Y0hKdmRtbHphVzl1WlhJdGRHOXJaVzR0WkRKc2NUY2lMQ0pyZFdKbGNtNWxkR1Z6TG1sdkwzTmxjblpwWTJWaFkyTnZkVzUwTDNObGNuWnBZMlV0WVdOamIzVnVkQzV1WVcxbElqb2libVp6TFdOc2FXVnVkQzF3Y205MmFYTnBiMjVsY2lJc0ltdDFZbVZ5Ym1WMFpYTXVhVzh2YzJWeWRtbGpaV0ZqWTI5MWJuUXZjMlZ5ZG1salpTMWhZMk52ZFc1MExuVnBaQ0k2SWpJd1pEQXdZamt6TFRjM01tWXRORGxrTXkxaE9XSTBMV1l6TmpVMlpqYzJaalV6TlNJc0luTjFZaUk2SW5ONWMzUmxiVHB6WlhKMmFXTmxZV05qYjNWdWREcGtaV1poZFd4ME9tNW1jeTFqYkdsbGJuUXRjSEp2ZG1semFXOXVaWElpZlEubmFjR1ZESW1OMFpWcmM1OTgtdjVBUFlYNUVqM2JfMXNGd0VzYjRidEQ0MTlpWi1mcEhLZ3hRNE9pcE5yTXptQVVVRHZJUEtSbnc4YmJEUWdvQWNWU1AyakxEc0RLUl9BX2NibTM5V292RC1DYnZfU2VlbFBPNUs1SnRSbm5WaXN1UWlwTDFZTENxSEZnb3VQRHdMb0t6OEVPYU5DMEpiTG5vOGwwYlJvUWpoazUtOUpfYWxnR1gxeEhOcFlJbFc2SUh5aUttNWU0c1dJNnVnTUo5N0tNSnJlMEVhX0VjVEIzV0NoYl9LdlFoeG5ocGdHeGRSa2ExTU1iRlpqVS01ekxES3B3MjFMWUFaeDhJdFJHNEtsekNNNHdHbGVWU3dHUERrMU9OS0VyZUJRUHVMY3U3SXpNbnhyczdQZmFwWmZvUkhUazJPYTlqcjc4THlXdlJ3QzlB"
},
"kind": "Secret",
"metadata": {
"annotations": {
"kubernetes.io/service-account.name": "nfs-client-provisioner",
"kubernetes.io/service-account.uid": "20d00b93-772f-49d3-a9b4-f3656f76f535"
},
"creationTimestamp": "2022-01-25T06:58:02Z",
"name": "nfs-client-provisioner-token-d2lq7",
"namespace": "default",
"resourceVersion": "505425",
"uid": "667fe7ea-6025-407a-934f-4c8983c5ca00"
},
"type": "kubernetes.io/service-account-token"
}
Check:
[root@localhost nfs]# kubectl get pv,pvc,pods,sc -o wide
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
persistentvolume/pvc-5b33f6b3-a52b-450e-ba7f-422a24a93922 1Mi RWX Delete Bound default/test-claim managed-nfs-storage 28m Filesystem
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/test-claim Bound pvc-5b33f6b3-a52b-450e-ba7f-422a24a93922 1Mi RWX managed-nfs-storage 28m Filesystem
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nfs-client-provisioner-848b88ddd4-w9rv9 1/1 Running 0 28m 10.42.1.114 172.16.10.21
pod/test-pod 0/1 Completed 0 28m 10.42.1.115 172.16.10.21
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/longhorn driver.longhorn.io Delete Immediate true 5d21h
storageclass.storage.k8s.io/managed-nfs-storage k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate true 28m
[root@localhost nfs]# cat /opt/vlumes/default-test-claim-pvc-5b33f6b3-a52b-450e-ba7f-422a24a93922/SUCCESS
The SUCCESS file now exists at /opt/vlumes/default-test-claim-pvc-5b33f6b3-a52b-450e-ba7f-422a24a93922/SUCCESS.
Now our own PVC and Pod
PVC: this time there is no need to create a PV first, nor to add label selectors to pick one:
[root@localhost nfs]# cat pvc1.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
  #namespace: wubo
spec:
  accessModes: # access modes
    #- ReadWriteOnce
    #- ReadWriteOncePod
    - ReadWriteMany
    #- ReadOnlyMany
  resources: # requested resources: 1Gi of storage
    requests:
      storage: 1Gi
  storageClassName: managed-nfs-storage
  #selector:
  #  matchLabels:
  #    name: "wubo-pv1"
  #  matchExpressions:
  #    - {key: environment, operator: In, values: [dev]}
Pod: reusing the one from the previous article
[root@localhost nfs]# cat nginx.yaml
apiVersion: v1
kind: Service
metadata:
  labels: {name: nginx}
  name: nginx
  #namespace: wubo
spec:
  ports:
    - {name: t9080, nodePort: 30002, port: 80, protocol: TCP, targetPort: 80}
  selector: {name: nginx}
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  #namespace: wubo
  labels: {name: nginx}
spec:
  replicas: 1
  selector:
    matchLabels: {name: nginx}
  template:
    metadata:
      name: nginx
      labels: {name: nginx}
    spec:
      containers:
        - name: nginx
          #image: harbor.jettech.com/jettechtools/nginx:1.21.4
          #image: 172.16.10.5:5000/library/nginx:1.21.4
          image: docker.io/library/nginx:1.21.4
          volumeMounts:
            - name: volv
              mountPath: /data
      volumes:
        - name: volv
          persistentVolumeClaim:
            claimName: pvc1
This shows that application-level Pods are decoupled from PVs: they do not need to care about PVs at all, only about the PVC they reference.
Check:
[root@localhost nfs]# kubectl get pv,pvc,pods,sc -o wide
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
persistentvolume/pvc-5b33f6b3-a52b-450e-ba7f-422a24a93922 1Mi RWX Delete Bound default/test-claim managed-nfs-storage 34m Filesystem
persistentvolume/pvc-ae7207c9-034f-4af2-8d03-568540bdadda 1Gi RWX Delete Bound default/pvc1 managed-nfs-storage 3m44s Filesystem
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/test-claim Bound pvc-5b33f6b3-a52b-450e-ba7f-422a24a93922 1Mi RWX managed-nfs-storage 34m Filesystem
persistentvolumeclaim/pvc1 Bound pvc-ae7207c9-034f-4af2-8d03-568540bdadda 1Gi RWX managed-nfs-storage 3m44s Filesystem
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nfs-client-provisioner-848b88ddd4-w9rv9 1/1 Running 0 35m 10.42.1.114 172.16.10.21
pod/nginx-5cc4bd9557-zkszk 1/1 Running 0 2m22s 10.42.2.97 172.16.10.15
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/longhorn driver.longhorn.io Delete Immediate true 5d21h
storageclass.storage.k8s.io/managed-nfs-storage k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate true 35m
Testing with a file:
Enter the container
[root@localhost nfs]# kubectl exec -it pod/nginx-5cc4bd9557-zkszk sh
Write some data
# cd /data
# ls
# mkdir wubo
# cd wubo
# echo aaa > a.txt
# exit
Check the mount; it looks just like the earlier static case, except that the mounted directory was generated automatically
# mount
172.16.10.5:/opt/vlumes/default-pvc1-pvc-ae7207c9-034f-4af2-8d03-568540bdadda on /data type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=172.16.10.15,local_lock=none,addr=172.16.10.5)
On the host (the NFS server), the data is there:
[root@localhost nfs]# cat /opt/vlumes/default-pvc1-pvc-ae7207c9-034f-4af2-8d03-568540bdadda/wubo/a.txt
aaa
Delete the pod and the PVC, then look again:
[root@localhost nfs]# kubectl delete -f nginx.yaml
[root@localhost nfs]# kubectl delete -f pvc1.yaml
Checking again: after the PVC was deleted, the PV went with it, so the data on the host should be gone as well
[root@localhost nfs]# kubectl get pv,pvc,pods,sc -o wide
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
persistentvolume/pvc-5b33f6b3-a52b-450e-ba7f-422a24a93922 1Mi RWX Delete Bound default/test-claim managed-nfs-storage 39m Filesystem
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/test-claim Bound pvc-5b33f6b3-a52b-450e-ba7f-422a24a93922 1Mi RWX managed-nfs-storage 39m Filesystem
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nfs-client-provisioner-848b88ddd4-w9rv9 1/1 Running 0 40m 10.42.1.114 172.16.10.21
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/longhorn driver.longhorn.io Delete Immediate true 5d21h
storageclass.storage.k8s.io/managed-nfs-storage k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate true 39m
On the host: apparently deleted
[root@localhost nfs]# cat /opt/vlumes/default-pvc1-pvc-ae7207c9-034f-4af2-8d03-568540bdadda/wubo/a.txt
cat: /opt/vlumes/default-pvc1-pvc-ae7207c9-034f-4af2-8d03-568540bdadda/wubo/a.txt: No such file or directory
In fact nothing was deleted; the directory was just renamed:
[root@localhost nfs]# cat /opt/vlumes/archived-pvc-ae7207c9-034f-4af2-8d03-568540bdadda/wubo/a.txt
aaa
When the PV is reclaimed, its data is kept on the NFS server under an archived- prefix (the archived-${namespace}-${pvcName}-${pvName} naming format mentioned earlier).
This happened because reclaimPolicy: Delete is selected here; you can change it to Retain and test that yourself.
[root@localhost nfs]# cat nfs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "false" ## whether this is the default storageclass
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # must match the deployment's PROVISIONER_NAME env var
parameters:
  archiveOnDelete: "true" ## "false": data is removed when the PVC is deleted; "true": data is kept (archived)
mountOptions:
  - hard ## hard mount
  - nfsvers=4 ## NFS version; set according to your NFS server version
reclaimPolicy: Delete # Delete or Retain
volumeBindingMode: Immediate # Immediate (bind right away) or WaitForFirstConsumer (delayed binding)
allowVolumeExpansion: true # this field allows dynamic expansion
If you hit this error:
"MountVolume.SetUp failed for volume "kube-api-access-fcz9j" : object "default"/"kube-root-ca.crt" not registered"
[root@localhost tls]# pwd
/var/lib/rancher/k3s/server/tls
[root@localhost tls]# kubectl get configmap kube-root-ca.crt -o json
{
"apiVersion": "v1",
"data": {
"ca.crt": "-----BEGIN CERTIFICATE-----\nMIIBdzCCAR2gAwIBAgIBADAKBggqhkjOPQQDAjAjMSEwHwYDVQQDDBhrM3Mtc2Vy\ndmVyLWNhQDE2NDI1NjMzNzkwHhcNMjIwMTE5MDMzNjE5WhcNMzIwMTE3MDMzNjE5\nWjAjMSEwHwYDVQQDDBhrM3Mtc2VydmVyLWNhQDE2NDI1NjMzNzkwWTATBgcqhkjO\nPQIBBggqhkjOPQMBBwNCAARVuWsvfcDqrHvDUvSFh0GhqAk2AVhbKi4AvJ2U4aet\n5k1xnsA0RlswgrOq9wCJhum0FukCiyEEMAXliNQ9RdPho0IwQDAOBgNVHQ8BAf8E\nBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUqdpC8Myd+Ix09nxdH86I\nWSOqd7MwCgYIKoZIzj0EAwIDSAAwRQIgGoiulBOQpYzQBy1uKqFaEux+rI70ImBw\nUovmg8rpIgACIQD0pn0+/WMMRw93SF08zAVT/sTp/o9Pq9cVd9aJP6iz2A==\n-----END CERTIFICATE-----\n"
},
"kind": "ConfigMap",
"metadata": {
"annotations": {
"kubernetes.io/description": "Contains a CA bundle that can be used to verify the kube-apiserver when using internal endpoints such as the internal service IP or kubernetes.default.svc. No other usage is guaranteed across distributions of Kubernetes clusters."
},
"creationTimestamp": "2022-01-19T03:36:37Z",
"name": "kube-root-ca.crt",
"namespace": "default",
"resourceVersion": "425",
"uid": "8f0f88c9-d23a-482e-8c50-115ad7c892ca"
}
}
[root@localhost tls]# grep -rin MIIBdzCCAR2gAwIBAgIBADAKBggqhkjOPQQDAjAjMSEwHwYDVQQDDBhrM3Mtc2Vy
server-ca.crt:2:MIIBdzCCAR2gAwIBAgIBADAKBggqhkjOPQQDAjAjMSEwHwYDVQQDDBhrM3Mtc2Vy
serving-kube-apiserver.crt:15:MIIBdzCCAR2gAwIBAgIBADAKBggqhkjOPQQDAjAjMSEwHwYDVQQDDBhrM3Mtc2Vy
I am running k3s; the certificates live under /var/lib/rancher/k3s/server/tls.
kube-root-ca.crt is identical to server-ca.crt, so when the error says kube-root-ca.crt is not registered, it is not obvious what else could be missing from the certificate chain.
Resolution:
My understanding is that RootCAConfigMap publishes kube-root-ca.crt into every namespace for the default service account. From Kubernetes 1.22 on, RootCAConfigMap defaults to true, so when a pod is created, this certificate of the default account is used. See the documentation on bound service account token volumes: Managing Service Accounts | Kubernetes.
Running k3s version v1.22.5+k3s, the certificate is indeed present in the configmaps:
[root@localhost roles]# kubectl get configmap --all-namespaces | grep kube-root-ca.crt
default kube-root-ca.crt 1 6d3h
kube-system kube-root-ca.crt 1 6d3h
kube-public kube-root-ca.crt 1 6d3h
kube-node-lease kube-root-ca.crt 1 6d3h
longhorn-system kube-root-ca.crt 1 3d21h
wubo kube-root-ca.crt 1 24h
wuqi kube-root-ca.crt 1 24h
To stop the token volume from being auto-created for the default service account (or for a particular service account already used to create pods), set automountServiceAccountToken to false in the serviceaccount configuration.
1) In version 1.6+, you can opt out of automounting API credentials for a service account by setting this on the ServiceAccount itself:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
2) In version 1.6+, you can also opt out of automounting API credentials for a particular pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
  ...
If both the ServiceAccount and the Pod spec set automountServiceAccountToken, the Pod spec takes precedence.
Kubernetes has supported expanding a PVC after its creation since version 1.11.
3.1 First check whether the storageclass is configured for dynamic expansion, i.e. whether it carries the allowVolumeExpansion field
[root@localhost nfs]# kubectl get sc managed-nfs-storage -o yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
  creationTimestamp: "2022-01-25T07:44:39Z"
  name: managed-nfs-storage
  resourceVersion: "510351"
  uid: c8b55920-610e-4a0c-a09a-4152c1611373
mountOptions:
  - hard
  - nfsvers=4
parameters:
  archiveOnDelete: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
3.2 The allowVolumeExpansion: false case:
[root@localhost nfs]# cat nfs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "false" ## whether this is the default storageclass
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # must match the deployment's PROVISIONER_NAME env var
parameters:
  archiveOnDelete: "true" ## "false": data is removed when the PVC is deleted; "true": data is kept (archived)
mountOptions:
  - hard ## hard mount
  - nfsvers=4 ## NFS version; set according to your NFS server version
reclaimPolicy: Delete # Delete or Retain
volumeBindingMode: Immediate # Immediate (bind right away) or WaitForFirstConsumer (delayed binding)
allowVolumeExpansion: false # set to true to allow dynamic expansion
Above you can see the allowVolumeExpansion field. If it is absent (or false), dynamic expansion is not supported; let's try an expansion to verify
[root@localhost nfs]# kubectl edit pvc pvc1
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
  creationTimestamp: "2022-01-25T07:57:14Z"
  finalizers:
    - kubernetes.io/pvc-protection
  name: pvc1
  namespace: default
  resourceVersion: "511734"
  uid: 8d58b163-4669-40d9-8661-152c1d4045d2
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: managed-nfs-storage
  volumeMode: Filesystem
  volumeName: pvc-8d58b163-4669-40d9-8661-152c1d4045d2
status:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  phase: Bound
Change 1Gi to 2Gi and save; it fails with: error: persistentvolumeclaims "pvc1" could not be patched: persistentvolumeclaims "pvc1" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
You can run `kubectl replace -f /tmp/kubectl-edit-3666233252.yaml` to try this update again.
This confirms that dynamic expansion is not supported.
3.3 Add the allowVolumeExpansion field to the storageclass
[root@localhost nfs]#
[root@localhost nfs]# cat nfs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "false" ## whether this is the default storageclass
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # must match the deployment's PROVISIONER_NAME env var
parameters:
  archiveOnDelete: "true" ## "false": data is removed when the PVC is deleted; "true": data is kept (archived)
mountOptions:
  - hard ## hard mount
  - nfsvers=4 ## NFS version; set according to your NFS server version
reclaimPolicy: Delete # Delete or Retain
volumeBindingMode: Immediate # Immediate (bind right away) or WaitForFirstConsumer (delayed binding)
allowVolumeExpansion: true # this field allows dynamic expansion
Now it works. There is no need to redeploy the PVC or the Pod application; modifying the storageclass is enough
[root@localhost nfs]# kubectl edit pvc pvc1
persistentvolumeclaim/pvc1 edited
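The same resize can also be done non-interactively with kubectl patch, equivalent to the edit above:
kubectl patch pvc pvc1 -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
kubectl get pvc pvc1   # the new request is accepted; note the NFS subdir provisioner does not enforce size limits, so this is bookkeeping only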
statefulset:
Characteristics:
(1) Stable, unique network identities: pod hostnames follow the pattern (statefulset name)-(ordinal).
(2) Stable, durable storage: a PV is created for each pod via volumeClaimTemplates; deleting pods or scaling down does not delete the associated volumes.
(3) Ordered, graceful deployment and scale-out: for example, a Redis master/replica cluster should start the master first, then the replicas.
(4) Ordered, graceful deletion and termination: for example, to shrink a Redis cluster, shut the replicas down first.
(5) Ordered rolling updates: generally the replicas are updated first.
Composition:
Three components: headless service, volumeClaimTemplates, statefulset
(1) headless service:
1) A Deployment is fronted by a Service; a StatefulSet is fronted by a headless service. A headless service differs from a regular Service in that it has no cluster IP (a cluster IP is not a pod IP and has nothing concrete behind it); resolving its name returns the endpoint list of all the pods behind the headless service (real pod IPs, which do have something concrete behind them).
2) On top of the headless service, the statefulset creates a DNS domain name for every pod replica it controls, in the format:
$(podname).(headless service name)
FQDN (command: hostname -f): $(podname).(headless service name).namespace.svc.cluster.local, e.g. nginx-headless-0.nginx-headless.default.svc.cluster.local for the first pod below.
3) Why is a headless service needed?
With a Deployment, pod names carry no order; they contain random strings, so pod names are unordered. In a statefulset, order is mandatory and no pod may be arbitrarily replaced: a rebuilt pod keeps the same name. Since pod IPs change, pods are identified by name; the pod name is the pod's unique identifier and must stay persistently stable and valid. The headless service makes this work by giving each pod a unique, resolvable name.
(2) volumeClaimTemplates: a storage claim template. It specifies the PVC name (prefix) and size; PVCs are created from it automatically, and they must be served by a storage class.
1) Why is volumeClaimTemplates needed?
Stateful replica sets rely on persistent volumes, and the defining trait of a distributed system is that each node's data differs, so the nodes cannot share one volume: each needs its own dedicated storage. A volume defined in a Deployment's pod template is shared by all replicas (same data, because everything comes from one template), while in a statefulset each pod needs its own dedicated volume. So statefulset volumes cannot come from the pod template; instead the statefulset uses volumeClaimTemplates, the claim template, which generates a different PVC for each pod and binds it to a PV, giving each pod dedicated storage. That is why volumeClaimTemplates exists.
(3) statefulset: defines the application proper. Here it is named nginx, has three pod replicas, and a domain name is defined for each pod when the statefulset is deployed.
Spec fields (kubectl explain sts.spec shows the full reference; see the commands right after this list):
replicas: number of replicas
selector: label selector
serviceName: must reference a headless service
template: the pod template (this is where the volume to mount is referenced)
volumeClaimTemplates: generates the PVCs
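These fields can be explored straight from the cluster, for example:
kubectl explain sts.spec                        # top-level StatefulSet spec fields
kubectl explain sts.spec.volumeClaimTemplates   # schema of the claim template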
4.1 Create the headless service and the statefulset
nginx-statefulset.yaml
[root@localhost nfs]# cat nginx-statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  labels: {name: nginx-headless}
  name: nginx-headless
  #namespace: wubo
spec:
  ports:
    - {name: t9081, port: 81, protocol: TCP, targetPort: 80}
  selector: {name: nginx-headless}
  #type: NodePort
  clusterIP: None ## note this value: None is what makes it a headless service
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-headless
  #namespace: wubo
  labels: {name: nginx-headless}
spec:
  serviceName: "nginx-headless"
  replicas: 3 # three replicas
  selector:
    matchLabels: {name: nginx-headless}
  template:
    metadata:
      name: nginx-headless
      labels: {name: nginx-headless}
    spec:
      containers:
        - name: nginx-headless
          #image: harbor.jettech.com/jettechtools/nginx:1.21.4
          #image: 172.16.10.5:5000/library/nginx:1.21.4
          image: docker.io/library/nginx:1.21.4
          volumeMounts:
            - name: volv
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: volv
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage" # the name of the storage class we created
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
No PVC needs to be created this time. The pods are spread across 3 nodes, and three directories appear under /opt/vlumes on the nfs-server
[root@localhost nfs]# kubectl get pv,pvc,pods,sc,statefulset -o wide
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
persistentvolume/pvc-8601bae7-3182-43fa-b92b-da733f429427 1Gi RWO Delete Bound default/volv-nginx-headless-0 managed-nfs-storage 23s Filesystem
persistentvolume/pvc-2800e24f-9d11-4be3-9676-4c629bcdf3ba 1Gi RWO Delete Bound default/volv-nginx-headless-1 managed-nfs-storage 21s Filesystem
persistentvolume/pvc-8a974bbb-d6d6-484c-81b5-46cca9643288 1Gi RWO Delete Bound default/volv-nginx-headless-2 managed-nfs-storage 18s Filesystem
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/volv-nginx-headless-0 Bound pvc-8601bae7-3182-43fa-b92b-da733f429427 1Gi RWO managed-nfs-storage 24s Filesystem
persistentvolumeclaim/volv-nginx-headless-1 Bound pvc-2800e24f-9d11-4be3-9676-4c629bcdf3ba 1Gi RWO managed-nfs-storage 21s Filesystem
persistentvolumeclaim/volv-nginx-headless-2 Bound pvc-8a974bbb-d6d6-484c-81b5-46cca9643288 1Gi RWO managed-nfs-storage 18s Filesystem
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nfs-client-provisioner-848b88ddd4-rnq65 1/1 Running 0 3m32s 10.42.1.120 172.16.10.21
pod/nginx-headless-0 1/1 Running 0 24s 10.42.2.98 172.16.10.15
pod/nginx-headless-1 1/1 Running 0 21s 10.42.1.121 172.16.10.21
pod/nginx-headless-2 1/1 Running 0 18s 10.42.0.194 172.16.10.5
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/longhorn driver.longhorn.io Delete Immediate true 5d22h
storageclass.storage.k8s.io/managed-nfs-storage k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate true 3m20s
NAME READY AGE CONTAINERS IMAGES
statefulset.apps/nginx-headless 3/3 24s nginx-headless docker.io/library/nginx:1.21.4
On the host:
[root@localhost nfs]# ls /opt/vlumes/ -al
total 52220
drwxr-xr-x 6 root root 263 Jan 25 16:17 .
drwxr-xr-x 6 root root 65 Jan 25 10:33 ..
drwxrwxrwx 2 root root 6 Jan 25 16:17 default-volv-nginx-headless-0-pvc-8601bae7-3182-43fa-b92b-da733f429427
drwxrwxrwx 2 root root 6 Jan 25 16:17 default-volv-nginx-headless-1-pvc-2800e24f-9d11-4be3-9676-4c629bcdf3ba
drwxrwxrwx 2 root root 6 Jan 25 16:17 default-volv-nginx-headless-2-pvc-8a974bbb-d6d6-484c-81b5-46cca9643288
This can be done on the NFS Server, or equally from inside each container:
[root@localhost nfs]# echo 172.16.10.15 > /opt/vlumes/default-volv-nginx-headless-0-pvc-8601bae7-3182-43fa-b92b-da733f429427/index.html
[root@localhost nfs]# echo 172.16.10.21 > /opt/vlumes/default-volv-nginx-headless-1-pvc-2800e24f-9d11-4be3-9676-4c629bcdf3ba/index.html
[root@localhost nfs]# echo 172.16.10.5 > /opt/vlumes/default-volv-nginx-headless-2-pvc-8a974bbb-d6d6-484c-81b5-46cca9643288/index.html
From any node in the cluster:
# enter any pod in the cluster and resolve the nginx-headless service (the short name nslookup nginx-headless works too)
[root@localhost nfs]# kubectl exec -it pod/nginx-headless-0 -- /bin/sh
# nslookup nginx-headless.default.svc.jettech.com
Name: nginx-headless
Address 1: 10.10.2.3 10-10-2-3.nginx-headless.default.svc.jettech.com # three addresses, one per pod
Address 2: 10.10.2.4 10-10-2-4.nginx-headless.default.svc.jettech.com
Address 3: 10.10.2.5 10-10-2-5.nginx-headless.default.svc.jettech.com
#curl 10.10.2.3
172.16.10.5
#curl 10.10.2.4
172.16.10.15
#curl 10.10.2.5
172.16.10.21
# cd /data
# ls
index.html
# cat index.html
172.16.10.15
# exit
[root@localhost nfs]# kubectl exec -it pod/nginx-headless-1 -- /bin/sh
# cd /data
# cat index.html
172.16.10.21
# exxit
/bin/sh: 3: exxit: not found
# exit
command terminated with exit code 127
[root@localhost nfs]# kubectl exec -it pod/nginx-headless-3 -- /bin/sh
Error from server (NotFound): pods "nginx-headless-3" not found
[root@localhost nfs]# kubectl exec -it pod/nginx-headless-2 -- /bin/sh
# cd /data
# cat index.html
172.16.10.5
# exit
[root@localhost nfs]#
For a statefulset, we can grow or shrink the number of pod replicas and observe the state of the PVs/PVCs as they change, e.g. with the sketch below.
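A quick sketch of that experiment (replica counts are arbitrary):
kubectl scale statefulset nginx-headless --replicas=5   # two new pods, two new PVCs/PVs appear
kubectl get pv,pvc
kubectl scale statefulset nginx-headless --replicas=2   # pods are removed in reverse ordinal order
kubectl get pv,pvc   # the PVCs/PVs of the removed pods remain Bound, per characteristic (2) above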
5.1 The first configuration
archiveOnDelete: "false"
reclaimPolicy: Delete # not set explicitly; Delete is the default
Test results:
1. After a pod is deleted and rebuilt, the data is still there; the old pod name and its data are reused by the new pod.
2. After the sc is deleted and recreated, the data is still there and is likewise reused.
3. After the PVC is deleted, the PV is deleted and the corresponding data on the NFS Server is deleted.
5.2 The second configuration
archiveOnDelete: "false"
reclaimPolicy: Retain
Test results:
1. After a pod is deleted and rebuilt, the data is still there; the old pod name and its data are reused by the new pod.
2. After the sc is deleted and recreated, the data is still there and is likewise reused.
3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the corresponding data on the NFS Server is kept.
4. After the sc is recreated, a newly created PVC binds a new PV; the old data can be copied into the new PV.
5. Steps 3 and 4 can also be handled differently; see section 2.4.2 (Deleting a PVC) of the PV article.
5.3 The third configuration
archiveOnDelete: "true"
reclaimPolicy: Retain
Results:
1. After a pod is deleted and rebuilt, the data is still there; the old pod name and its data are reused by the new pod.
2. After the sc is deleted and recreated, the data is still there and is likewise reused.
3. After the PVC is deleted, the PV is not deleted; its status changes from Bound to Released, and the corresponding data on the NFS Server is kept.
4. After the sc is recreated, a newly created PVC binds a new PV; the old data can be copied into the new PV.
5.4 The fourth configuration
archiveOnDelete: "true"
reclaimPolicy: Delete
Results:
1. After a pod is deleted and rebuilt, the data is still there; the old pod name and its data are reused by the new pod.
2. After the sc is deleted and recreated, the data is still there and is likewise reused.
3. After the PVC is deleted, the PV is deleted with it, but the corresponding data on the NFS Server is kept under an archived- directory (as demonstrated earlier).
4. After the sc is recreated, a newly created PVC binds a new PV; the old data can be copied into the new PV.
Summary: except for the first configuration, the other three keep the data after the PV/PVC is deleted.
There are two ways to use a storageclass: reference it in the PVC resource manifest, or set it as the default storage class.
6.1 Referencing it in the PVC:
[root@localhost nfs]# cat test-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    ## Specifies the storageclass to use. If that storageclass is the default,
    ## this can be omitted; see "setting the default storage class" below.
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
6.2 Set it as the default storage class, which other PVCs can then use directly:
6.2.1) Setting the default
[root@localhost nfs]# cat nfs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "false" ## set to "true" to make this the default storageclass
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # must match the deployment's PROVISIONER_NAME env var
parameters:
  archiveOnDelete: "true" ## "false": data is removed when the PVC is deleted; "true": data is kept
mountOptions:
  - hard ## hard mount
  - nfsvers=4 ## NFS version; set according to your NFS server version
[root@localhost nfs]#
6.2.2) Another way to set the default storage class
View the current storage classes
[root@localhost nfs]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
longhorn driver.longhorn.io Delete Immediate true 5d22h
managed-nfs-storage k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate true 33m
# make managed-nfs-storage the default backend storage; managed-nfs-storage is the name configured in the storageclass
[root@localhost nfs]# kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/managed-nfs-storage patched
# check again; note the (default) marker
[root@localhost nfs]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
longhorn driver.longhorn.io Delete Immediate true 5d22h
managed-nfs-storage (default) k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate true 35m
6.3 Using the default StorageClass
If the cluster has a default StorageClass that satisfies our needs, then all that remains is to create the PersistentVolumeClaim (PVC); default dynamic provisioning handles the rest, and there is no need even to specify storageClassName:
[root@localhost nfs]# cat pvc2.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
  #namespace: wubo
spec:
  accessModes: # access modes
    #- ReadWriteOnce
    #- ReadWriteOncePod
    - ReadWriteMany
    #- ReadOnlyMany
  resources: # requested resources: 1Gi of storage
    requests:
      storage: 1Gi
  #storageClassName: managed-nfs-storage
  #selector:
  #  matchLabels:
  #    name: "wubo-pv1"
  #  matchExpressions:
  #    - {key: environment, operator: In, values: [dev]}
Observe that one more PVC and one more PV have appeared
[root@localhost nfs]# kubectl get pv,pvc,pods,sc,statefulset -o wide
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
persistentvolume/pvc-8601bae7-3182-43fa-b92b-da733f429427 1Gi RWO Delete Bound default/volv-nginx-headless-0 managed-nfs-storage 36m Filesystem
persistentvolume/pvc-2800e24f-9d11-4be3-9676-4c629bcdf3ba 1Gi RWO Delete Bound default/volv-nginx-headless-1 managed-nfs-storage 36m Filesystem
persistentvolume/pvc-8a974bbb-d6d6-484c-81b5-46cca9643288 1Gi RWO Delete Bound default/volv-nginx-headless-2 managed-nfs-storage 36m Filesystem
persistentvolume/pvc-a1948238-ce02-4004-8963-3b3778f35454 1Gi RWX Delete Bound default/pvc2 managed-nfs-storage 2s Filesystem
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/volv-nginx-headless-0 Bound pvc-8601bae7-3182-43fa-b92b-da733f429427 1Gi RWO managed-nfs-storage 36m Filesystem
persistentvolumeclaim/volv-nginx-headless-1 Bound pvc-2800e24f-9d11-4be3-9676-4c629bcdf3ba 1Gi RWO managed-nfs-storage 36m Filesystem
persistentvolumeclaim/volv-nginx-headless-2 Bound pvc-8a974bbb-d6d6-484c-81b5-46cca9643288 1Gi RWO managed-nfs-storage 36m Filesystem
persistentvolumeclaim/pvc2 Bound pvc-a1948238-ce02-4004-8963-3b3778f35454 1Gi RWX managed-nfs-storage 2s Filesystem
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nfs-client-provisioner-848b88ddd4-rnq65 1/1 Running 0 39m 10.42.1.120 172.16.10.21
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/longhorn driver.longhorn.io Delete Immediate true 5d22h
storageclass.storage.k8s.io/managed-nfs-storage (default) k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate true 39m
6.4 Disabling the default StorageClass
The default StorageClass cannot simply be deleted: it is installed as part of k3s in this cluster, and if deleted it gets re-installed.
[root@localhost nfs]# ls /var/lib/rancher/k3s/server/manifests/local-storage.yaml
/var/lib/rancher/k3s/server/manifests/local-storage.yaml
[root@localhost nfs]# cat /var/lib/rancher/k3s/server/manifests/local-storage.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
  - apiGroups: [""]
    resources: ["nodes", "persistentvolumeclaims", "configmaps"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["endpoints", "persistentvolumes", "pods"]
    verbs: ["*"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      priorityClassName: "system-node-critical"
      serviceAccountName: local-path-provisioner-service-account
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
        - key: "node-role.kubernetes.io/control-plane"
          operator: "Exists"
          effect: "NoSchedule"
        - key: "node-role.kubernetes.io/master"
          operator: "Exists"
          effect: "NoSchedule"
      containers:
        - name: local-path-provisioner
          image: rancher/local-path-provisioner:v0.0.20
          imagePullPolicy: IfNotPresent
          command:
            - local-path-provisioner
            - start
            - --config
            - /etc/config/config.json
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config/
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      volumes:
        - name: config-volume
          configMap:
            name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: kube-system
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/var/lib/rancher/k3s/storage"]
        }
      ]
    }
  setup: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
      case $opt in
        p)
          absolutePath=$OPTARG
          ;;
        s)
          sizeInBytes=$OPTARG
          ;;
        m)
          volMode=$OPTARG
          ;;
      esac
    done
    mkdir -m 0777 -p ${absolutePath}
    chmod 701 ${absolutePath}/..
  teardown: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
      case $opt in
        p)
          absolutePath=$OPTARG
          ;;
        s)
          sizeInBytes=$OPTARG
          ;;
        m)
          volMode=$OPTARG
          ;;
      esac
    done
    rm -rf ${absolutePath}
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      containers:
        - name: helper-pod
          image: rancher/mirrored-library-busybox:1.32.1
Of course, you can also choose not to install it when installing k3s:
[root@localhost nfs]# k3s server --help | grep disable
--etcd-disable-snapshots (db) Disable automatic etcd snapshots
--disable value (components) Do not deploy packaged components and delete any deployed components (valid items: coredns, servicelb, traefik, local-storage, metrics-server)
--disable-scheduler (components) Disable Kubernetes default scheduler
--disable-cloud-controller (components) Disable k3s default cloud controller manager
--disable-kube-proxy (components) Disable running kube-proxy
--disable-network-policy (components) Disable k3s default network policy controller
--disable-helm-controller (components) Disable Helm controller
[root@localhost nfs]# k3s server --disable local-storage
Alternatively, you can switch off the default-StorageClass behaviour by removing the storageclass.kubernetes.io/is-default-class annotation, or by setting it to false:
[root@localhost nfs]# kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
[root@localhost nfs]# kubectl get storageclass local-path -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
  creationTimestamp: "2022-01-25T09:01:48Z"
  name: local-path
  resourceVersion: "519166"
  uid: 8c710916-1b12-4844-9eb4-e9cce6de5244
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
[root@localhost nfs]# kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/local-path patched
[root@localhost nfs]# kubectl get storageclass local-path -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  creationTimestamp: "2022-01-25T09:01:48Z"
  name: local-path
  resourceVersion: "519244"
  uid: 8c710916-1b12-4844-9eb4-e9cce6de5244
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
If no StorageClass object is marked with the default annotation, then PersistentVolumeClaim objects that do not specify a StorageClass will not trigger dynamic provisioning; instead, they fall back to binding an available PersistentVolume (PV).
6.5 What happens when a PersistentVolumeClaim (PVC) is deleted
If a volume was dynamically provisioned, the default reclaim policy is Delete. That means, by default, deleting the PVC also deletes the underlying PV and the corresponding storage. If the data on the volume must be kept, change the reclaim policy from Delete to Retain after the PV has been provisioned.
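A sketch of making that change on a live PV (substitute the real PV name from kubectl get pv):
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl get pv   # the RECLAIM POLICY column should now read Retain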