I installed Rancher Desktop 1.13.1 on my MacBook and selected K8s v1.29.3 for a local test environment. I found that the hostPath PV does not bind with the local-path storage class. Here is my YAML file:
# PostgreSQL installation manifest
# Define a namespace
apiVersion: v1
kind: Namespace
metadata:
  name: postgresd
---
# Define the configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  namespace: postgresd
data:
  POSTGRES_DB: postgres
  MAX_CONNECTIONS: "10000"
  LOG_MIN_DURATION_STATEMENT: "500ms"
---
# Define the persistent volume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-data-pv
  labels:
    app: postgres-pv
spec:
  capacity:
    storage: 5Gi # set the storage capacity as needed
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: local-path
  hostPath:
    path: "/Users/hulei/Workspaces/postgres/pg_data"
    type: "Directory"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - rancher-desktop
---
# Define sensitive data such as the username and password
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
  namespace: postgresd
type: Opaque
data:
  postgres-user: cG9zdGdyZXM= # base64-encoded username, here "postgres"
  postgres-password: U2VjdXJlUGFzc3dvcmQ= # base64-encoded password, here "SecurePassword"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-statefulset
  namespace: postgresd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  serviceName: postgres-serive
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgresd
          image: postgres:16.2-alpine3.19
          ports:
            - containerPort: 5432
              name: postgresd-port
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres-user
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres-password
            - name: POSTGRES_DB
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: POSTGRES_DB
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
              subPath: data
  volumeClaimTemplates:
    - metadata:
        name: postgres-data
      spec:
        selector:
          matchLabels:
            app: postgres-pv
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
        storageClassName: local-path
---
# Define the service mapping port 5432 to port 5432 of the kind cluster
apiVersion: v1
kind: Service
metadata:
  namespace: postgresd
  name: postgres-service
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
  type: ClusterIP
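As an aside, the base64 values in the Secret above can be reproduced from the shell; using printf rather than echo avoids encoding a trailing newline:

```shell
# Encode the Secret values; printf '%s' emits the string with no trailing newline.
printf '%s' 'postgres' | base64        # cG9zdGdyZXM=
printf '%s' 'SecurePassword' | base64  # U2VjdXJlUGFzc3dvcmQ=
```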
If I apply this file with kubectl, all the objects are created, but the hostPath PV never binds. When I run kubectl describe pod postgres-statefulset-0 -n postgresd, the pod stays Pending:
Name: postgres-statefulset-0
Namespace: postgresd
Priority: 0
Service Account: default
Node: <none>
Labels: app=postgres
apps.kubernetes.io/pod-index=0
controller-revision-hash=postgres-statefulset-5dcd5ff44d
statefulset.kubernetes.io/pod-name=postgres-statefulset-0
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/postgres-statefulset
Containers:
postgresd:
Image: postgres:16.2-alpine3.19
Port: 5432/TCP
Host Port: 0/TCP
Environment:
POSTGRES_USER: <set to the key postgres-user in secret postgres-secret > Optional: false
POSTGRES_PASSWORD: <set to the key postgres-password in secret postgres-secret > Optional: false
POSTGRES_DB: <set to the key POSTGRES_DB of config map postgres-config > Optional: false
Mounts:
/var/lib/postgresql/data from postgres-data (rw,path="data")
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m56hs (ro)
Volumes:
postgres-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: postgres-data-postgres-statefulset-0
ReadOnly: false
kube-api-access-m56hs:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
The kubectl describe pv postgres-data-pv output looks like this:
Name: postgres-data-pv
Labels: app=postgres-pv
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass: local-path
Status: Available
Claim:
Reclaim Policy: Recycle
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 5Gi
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [rancher-desktop]
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /Users/hulei/Workspaces/postgres/pg_data
HostPathType: Directory
Events: <none>
The kubectl describe statefulset postgres-statefulset -n postgresd output looks like this:
Name: postgres-statefulset
Namespace: postgresd
CreationTimestamp: Sat, 06 Apr 2024 19:16:04 +0800
Selector: app=postgres
Labels: <none>
Annotations: <none>
Replicas: 1 desired | 1 total
Update Strategy: RollingUpdate
Partition: 0
Pods Status: 0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=postgres
Containers:
postgresd:
Image: postgres:16.2-alpine3.19
Port: 5432/TCP
Host Port: 0/TCP
Environment:
POSTGRES_USER: <set to the key postgres-user in secret postgres-secret > Optional: false
POSTGRES_PASSWORD: <set to the key postgres-password in secret postgres-secret > Optional: false
POSTGRES_DB: <set to the key POSTGRES_DB of config map postgres-config > Optional: false
Mounts:
/var/lib/postgresql/data from postgres-data (rw,path="data")
Volumes: <none>
Volume Claims:
Name: postgres-data
StorageClass: local-path
Labels: <none>
Annotations: <none>
Capacity: 5Gi
Access Modes: [ReadWriteOnce]
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 5m13s statefulset-controller create Claim postgres-data-postgres-statefulset-0 Pod postgres-statefulset-0 in StatefulSet postgres-statefulset success
Normal SuccessfulCreate 5m13s statefulset-controller create Pod postgres-statefulset-0 in StatefulSet postgres-statefulset successful
What's the problem? I'm confused!
By the way, if I use docker-desktop with the hostpath StorageClass, the same file works well, and the data files are stored in /Users/<username>/Workspaces/postgres/pg_data/data.
Thank you David Maze!
The kubectl describe pvc postgres-data-postgres-statefulset-0 -n postgresd output looks like this:
Name: postgres-data-postgres-statefulset-0
Namespace: postgresd
StorageClass: local-path
Status: Pending
Volume:
Labels: app=postgres
Annotations: volume.beta.kubernetes.io/storage-provisioner: rancher.io/local-path
volume.kubernetes.io/selected-node: lima-rancher-desktop
volume.kubernetes.io/storage-provisioner: rancher.io/local-path
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: postgres-statefulset-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 21m persistentvolume-controller waiting for first consumer to be created before binding
Normal Provisioning 13m (x7 over 21m) rancher.io/local-path_local-path-provisioner-6c86858495-dfbv9_42b808e8-c338-4145-be53-b507a9504caa External provisioner is provisioning volume for claim "postgresd/postgres-data-postgres-statefulset-0"
Warning ProvisioningFailed 13m (x7 over 21m) rancher.io/local-path_local-path-provisioner-6c86858495-dfbv9_42b808e8-c338-4145-be53-b507a9504caa failed to provision volume with StorageClass "local-path": claim.Spec.Selector is not supported
Normal ExternalProvisioning 92s (x83 over 21m) persistentvolume-controller Waiting for a volume to be created either by the external provisioner rancher.io/local-path or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
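The ProvisioningFailed event says the local-path provisioner rejects claim.Spec.Selector. If that is the cause, one possible fix (my assumption, not something I have verified) is to drop the selector from the volumeClaimTemplates and let the claim be satisfied by storageClassName, capacity, and access mode alone:

```yaml
# Hypothetical variant of the volumeClaimTemplates without the selector,
# which the local-path provisioner reports as unsupported.
volumeClaimTemplates:
  - metadata:
      name: postgres-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
      storageClassName: local-path
```

Note that without the selector the dynamic provisioner may create its own volume under its default storage directory instead of binding the pre-created hostPath PV.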
Update!
The Rancher docs say that an override.yaml can mount a host path into the lima-rancher-desktop VM, so I created an override.yaml in ~/Library/Application Support/rancher-desktop/lima/_config/:
version: 0.0.1
disks:
- mount:
    path: /Users/hulei/Workspaces
    hostPath: /var/lib/rancher/k3s/storage/
    writeable: true
Then I restarted rancher-desktop.
Everything seemed OK: the pod was running, the StatefulSet was running, and the PV and PVC were Bound, but the app did not work!
When I run kubectl logs pod-name, it shows:
chown: /var/lib/postgresql: Permission denied
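For the chown failure, one commonly suggested sketch (an assumption on my part, not verified on Rancher Desktop) is to set a pod-level securityContext so the data volume is group-writable by the postgres user:

```yaml
# Hypothetical securityContext for the StatefulSet pod template;
# 70 is the postgres uid/gid in the alpine-based image (assumption).
spec:
  template:
    spec:
      securityContext:
        fsGroup: 70
```

Be aware that Kubernetes does not apply fsGroup ownership changes to hostPath volumes, so fixing the directory permissions from the host or VM side may be needed instead.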