Kubernetes persistent volume ReadWriteOnce (RWO) does not work for NFS


Hi there,

According to the docs:

ReadWriteOnce – the volume can be mounted as read-write by a single node

I created a PV based on NFS:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: tspv01
spec:
  capacity:
    storage: 15Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /gpfs/fs01/shared/prod/democluster01/dashdb/gamestop/spv01
    server: 169.55.11.79

And a PVC for that PV:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: sclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi

After creating the PVC, it bound to the PV:

root@hydra-cdsdev-dal09-0001:~/testscript# kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
sclaim    Bound     tspv01    15Gi       RWO           4m

Then I created 2 pods using the same PVC:

Pod 1:

kind: Pod
apiVersion: v1
metadata:
  name: mypodshared1
  labels:
    name: frontendhttp
spec:
  containers:
    - name: myfrontend
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: sclaim

Pod 2:

kind: Pod
apiVersion: v1
metadata:
  name: mypodshared2
  labels:
    name: frontendhttp
spec:
  containers:
    - name: myfrontend
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: sclaim

After creating the 2 pods, they were assigned to 2 different nodes. I can exec into both containers, and both can read and write in the NFS-mounted folder.

root@hydra-cdsdev-dal09-0001:~/testscript# kubectl get pod -o wide
NAME           READY     STATUS    RESTARTS   AGE       IP            NODE
mypodshared1   1/1       Running   0          18s       172.17.52.7   169.45.189.108
mypodshared2   1/1       Running   0          36s       172.17.83.9   169.45.189.116
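The cross-node write test can be reproduced with a couple of kubectl exec commands. This is a sketch against the pods defined above; the file name `testfile.txt` is illustrative and not from the original setup:

```shell
# Write a file from pod 1 (scheduled on node 169.45.189.108) ...
kubectl exec mypodshared1 -- sh -c 'echo "written from pod 1" > /usr/share/nginx/html/testfile.txt'

# ... then read it back from pod 2, which runs on a different node (169.45.189.116).
kubectl exec mypodshared2 -- cat /usr/share/nginx/html/testfile.txt

# Both commands succeed even though the PV's access mode is ReadWriteOnce,
# because the NFS server itself does not enforce the access mode.
```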

Does anybody know why this happened?

The accessModes are dependent upon the storage provider. NFS doesn't treat them differently, whereas hostPath should use the modes correctly.

See the following table for the various options: http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes

