Persistent Volumes (PVs) are cluster-wide storage resources provisioned by administrators or dynamically by StorageClasses. They represent actual storage in the cluster that can be claimed by PVCs.

Key Concepts

PV

A cluster-scoped storage resource that exists independently of any pod lifecycle.

Reclaim Policy

Defines what happens to the PV when its PVC is deleted (Retain, Delete, Recycle).

Volume Source

The underlying storage backend (NFS, CSI, Local, Cloud disks, etc.).

Claim Reference

Reference to the PVC that has bound this volume.

PersistentVolumes are cluster-scoped resources. They are not bound to any namespace and can be claimed by PVCs from any namespace.
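As a sketch of how the claim reference works, a PV can be pre-bound to a specific PVC by setting spec.claimRef; the PVC name, namespace, and NFS details below are illustrative:

```yaml
# Hypothetical PV pre-bound to a PVC named "data-claim" in the "apps" namespace.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prebound-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  claimRef:
    namespace: apps      # PVCs are namespaced; the PV itself is cluster-scoped
    name: data-claim
  nfs:
    server: nfs-server.example.com
    path: /exports/prebound
```

Once bound, Kubernetes fills in the remaining claimRef fields (such as the PVC's uid) automatically.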

Required Permissions

| Action    | Permission                                   |
|-----------|----------------------------------------------|
| View PVs  | iam:project:infrastructure:kubernetes:read   |
| Create PV | iam:project:infrastructure:kubernetes:write  |
| Edit PV   | iam:project:infrastructure:kubernetes:write  |
| Delete PV | iam:project:infrastructure:kubernetes:delete |

PV Status Values

| Status    | Description                                           |
|-----------|-------------------------------------------------------|
| Available | PV is free and not yet bound to a PVC                 |
| Bound     | PV is bound to a PVC                                  |
| Released  | PVC was deleted but the resource is not yet reclaimed |
| Failed    | Automatic reclamation of the volume failed            |
| Pending   | PV is being provisioned                               |

A “Released” PV cannot be rebound to a new PVC automatically. The data must be manually handled based on the reclaim policy before the PV can be reused.

Reclaim Policies

| Policy  | Description                                                                                      |
|---------|--------------------------------------------------------------------------------------------------|
| Retain  | PV is kept after PVC deletion for manual data recovery (default for manually created PVs)        |
| Delete  | PV and underlying storage are deleted when the PVC is deleted (default for dynamic provisioning) |
| Recycle | Basic scrub (rm -rf /volume/*) before making the volume available again (deprecated)             |

Use Retain for production data that needs to be recoverable. Use Delete for ephemeral or easily reproducible data.

How to View PVs

1. Select Cluster: Choose a cluster from the cluster dropdown.
2. View List: The list shows all PersistentVolumes in the cluster (cluster-scoped, no namespace filter).
3. Filter and Search: Use the search box to find PVs by name, status, storage class, or volume source type. Filter by status (Available, Bound, Released, Failed).

How to View PV Details

1. Find the PV: Locate the PV in the list.
2. Click PV Name: Click on the PV name to open the detail drawer.
3. Review Details: View PV information including:
  • Overview: Name, status, storage class, reclaim policy, age
  • Capacity: Storage size and volume mode
  • Access Modes: How the volume can be accessed
  • Volume Source: Backend storage details (NFS path, CSI driver, etc.)
  • Claim: Reference to the bound PVC (if any)
  • Node Affinity: Node constraints for local volumes
  • Labels & Annotations: Metadata attached to the PV
  • Events: Recent Kubernetes events

How to Create a PV

1. Click Create PV: Click the Create PV button in the page header.
2. Write YAML: Enter the PV manifest in YAML format. Key fields:
  • spec.capacity.storage - Storage capacity
  • spec.accessModes - How the volume can be accessed
  • spec.persistentVolumeReclaimPolicy - What happens when released
  • spec.storageClassName - Storage class for dynamic binding
  • Volume source (e.g., spec.nfs, spec.csi, spec.local)
3. Create: Click Create to apply the manifest.

How to Edit a PV

1. Open Actions Menu: Click the actions menu (three dots) on the PV row.
2. Click Edit YAML: Select Edit YAML to open the YAML editor.
3. Modify Spec: Edit the PV specification. Note that most fields are immutable after creation.
4. Save: Click Update to apply changes.

Most PV fields are immutable after creation, including capacity, access modes, and volume source. You can modify the reclaim policy, labels, and annotations.
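For example, switching a dynamically provisioned PV's reclaim policy from Delete to Retain before removing its PVC only requires editing the one mutable field; the PV name below is illustrative, and the rest of the manifest must be left exactly as it was at creation:

```yaml
# Shown in context of the full manifest in the editor:
# only spec.persistentVolumeReclaimPolicy changes.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: csi-pv                            # existing PV (illustrative name)
spec:
  persistentVolumeReclaimPolicy: Retain   # was: Delete
```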

How to Delete a PV

1. Open Actions Menu: Click the actions menu on the PV row.
2. Click Delete: Select Delete from the menu.
3. Confirm: Confirm the deletion. This removes the PV from the cluster.

Deleting a PV that is bound to a PVC will fail. Delete or unbind the PVC first. Deleting a PV does NOT necessarily delete the underlying storage (this depends on the storage backend and reclaim policy).

Volume Source Types

PVs support various storage backends:
| Type                 | Description                                               |
|----------------------|-----------------------------------------------------------|
| CSI                  | Container Storage Interface drivers (modern, recommended) |
| NFS                  | Network File System shares                                |
| Local                | Local node storage with node affinity                     |
| HostPath             | Directory on the node (testing only)                      |
| iSCSI                | iSCSI block storage                                       |
| FC                   | Fibre Channel storage                                     |
| RBD                  | Ceph RADOS Block Device                                   |
| CephFS               | Ceph filesystem                                           |
| GlusterFS            | GlusterFS network filesystem                              |
| AWSElasticBlockStore | AWS EBS volumes                                           |
| GCEPersistentDisk    | Google Cloud persistent disks                             |
| AzureDisk            | Azure managed disks                                       |
| AzureFile            | Azure File shares                                         |

Example PVs

NFS Volume

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    server: nfs-server.example.com
    path: /exports/data

Local Volume

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-1

CSI Volume

apiVersion: v1
kind: PersistentVolume
metadata:
  name: csi-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: csi-standard
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0123456789abcdef0
    fsType: ext4

Volume Modes

| Mode       | Description                                  |
|------------|----------------------------------------------|
| Filesystem | Volume is mounted as a directory (default)   |
| Block      | Volume is presented as a raw block device    |
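A raw-block PV differs from a filesystem PV only in spec.volumeMode; as a sketch, the device path and node name below are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: block-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  volumeMode: Block            # raw block device; no filesystem is mounted
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /dev/sdb             # illustrative device path on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-1
```

A consuming PVC must also set volumeMode: Block, and the pod attaches it via volumeDevices (devicePath) rather than volumeMounts.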

Access Modes

| Mode             | Abbreviation | Description                             |
|------------------|--------------|-----------------------------------------|
| ReadWriteOnce    | RWO          | Mounted read-write by a single node     |
| ReadOnlyMany     | ROX          | Mounted read-only by multiple nodes     |
| ReadWriteMany    | RWX          | Mounted read-write by multiple nodes    |
| ReadWriteOncePod | RWOP         | Mounted read-write by a single pod      |

Troubleshooting

PV stuck in Released state

  • The PVC was deleted but the reclaim policy is Retain
  • Manually delete the spec.claimRef to make the PV Available again
  • Or delete and recreate the PV after backing up the data
  • Consider changing the reclaim policy for future PVs
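Concretely, a Released PV still carries the claimRef stanza recorded at bind time; deleting the whole stanza from the PV's spec (after verifying the data) returns the PV to Available. The namespace and PVC name below are illustrative:

```yaml
# Inside the Released PV's spec — remove this entire stanza to allow rebinding.
# The real stanza also includes the old PVC's uid and resourceVersion.
claimRef:
  apiVersion: v1
  kind: PersistentVolumeClaim
  namespace: apps      # illustrative namespace
  name: data-claim     # illustrative PVC name
```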
PV in Failed state

  • Automatic reclamation failed (usually with the Recycle policy)
  • Check events for error details
  • Manually handle the volume and recreate it if needed
  • The Recycle policy is deprecated; use Delete or Retain instead
PVC not binding to the PV

  • Check that access modes match between the PVC and PV
  • Verify capacity: PV capacity must be >= the PVC request
  • Ensure the storage class matches (or both are empty for no class)
  • Check whether the PV is already bound to another PVC
  • For local volumes, verify node affinity allows scheduling
PV cannot be deleted

  • The PV may still be bound to a PVC
  • Delete the PVC first, then delete the PV
  • Check for finalizers blocking deletion
  • Verify you have delete permissions
Local volume not accessible

  • Verify the path exists on the specified node
  • Check the node affinity configuration
  • Ensure the pod is scheduled to the correct node
  • Verify directory permissions on the node
NFS mount failures

  • Verify the NFS server is accessible from cluster nodes
  • Check NFS export permissions
  • Ensure NFS client packages are installed on the nodes
  • Verify the firewall allows NFS traffic

FAQ

What is the difference between a PV and a PVC?

A PV (PersistentVolume) is the actual storage resource, provisioned by an administrator or dynamically. A PVC (PersistentVolumeClaim) is a request for storage by a user. PVCs bind to PVs to use the storage.
When should I create PVs manually instead of using dynamic provisioning?

Create PVs manually when:
  • Using storage that doesn’t support dynamic provisioning
  • Pre-provisioning storage for specific workloads
  • Using local storage with node affinity
  • Migrating from existing storage systems
Use dynamic provisioning (StorageClass) for cloud environments and CSI drivers.
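As a hedged sketch of the dynamic path, provisioning is driven by a StorageClass rather than a hand-written PV; the provisioner and parameters below assume the AWS EBS CSI driver (the driver used in the CSI example above) and may differ in your environment:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-standard
provisioner: ebs.csi.aws.com       # assumes the AWS EBS CSI driver is installed
parameters:
  type: gp3                        # illustrative driver-specific parameter
reclaimPolicy: Delete              # dynamically provisioned PVs default to Delete
volumeBindingMode: WaitForFirstConsumer
```

PVCs that reference this class get a PV created for them on demand, so no PV manifest is written by hand.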
Can I resize a PV?

Not directly. PV capacity is immutable after creation. To resize:
  1. Back up your data
  2. Create a new, larger PV
  3. Migrate data to the new volume
  4. Update workloads to use the new PVC
What happens to a PV when its PVC is deleted (with the Retain policy)?

The PV enters the “Released” state and keeps its data. It cannot be automatically bound to a new PVC. You must manually clear the claimRef, or delete and recreate the PV, to reuse it.
Can multiple PVCs bind to the same PV?

No. A PV can only be bound to one PVC at a time. For shared storage, use a volume type that supports ReadWriteMany (like NFS) and create separate PVs, or use a CSI driver with volume sharing capabilities.
Why is CSI recommended over in-tree volume plugins?

CSI (Container Storage Interface) is the modern standard:
  • Drivers are maintained by storage vendors, not Kubernetes core
  • New features don’t require Kubernetes upgrades
  • Supports snapshots, cloning, and expansion
  • In-tree plugins (awsElasticBlockStore, gcePersistentDisk) are deprecated
What is the difference between Local and HostPath volumes?

Local volumes are intended for production use and provide:
  • Node affinity (the PV is pinned to a specific node)
  • Proper lifecycle management
  • Static provisioning only (local volumes do not support dynamic provisioning)
HostPath is for testing only and should not be used in production.
How do I migrate data from one PV to another?

  1. Create a new PV with the desired configuration
  2. Create a new PVC bound to the new PV
  3. Run a migration pod with both volumes mounted
  4. Copy the data from the old to the new volume
  5. Update workloads to use the new PVC
  6. Delete the old PVC/PV after verification
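The migration pod in step 3 can be sketched as below; the PVC names and container image are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pv-migrate
spec:
  restartPolicy: Never
  containers:
    - name: copy
      image: busybox:1.36           # any image providing sh and cp works
      command: ["sh", "-c", "cp -a /old/. /new/"]
      volumeMounts:
        - name: old-data
          mountPath: /old
        - name: new-data
          mountPath: /new
  volumes:
    - name: old-data
      persistentVolumeClaim:
        claimName: old-pvc          # illustrative: PVC bound to the old PV
    - name: new-data
      persistentVolumeClaim:
        claimName: new-pvc          # illustrative: PVC bound to the new PV
```

Wait for the pod to complete, verify the contents of the new volume, and only then repoint workloads and delete the old PVC/PV.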