Key Concepts
PV
A cluster-scoped storage resource that exists independently of any pod lifecycle.
Reclaim Policy
Defines what happens to the PV when its PVC is deleted (Retain, Delete, Recycle).
Volume Source
The underlying storage backend (NFS, CSI, Local, Cloud disks, etc.).
Claim Reference
Reference to the PVC that has bound this volume.
PersistentVolumes are cluster-scoped resources. They are not bound to any namespace and can be claimed by PVCs from any namespace.
Required Permissions
| Action | Permission |
|---|---|
| View PVs | iam:project:infrastructure:kubernetes:read |
| Create PV | iam:project:infrastructure:kubernetes:write |
| Edit PV | iam:project:infrastructure:kubernetes:write |
| Delete PV | iam:project:infrastructure:kubernetes:delete |
PV Status Values
| Status | Description |
|---|---|
| Available | PV is free and not yet bound to a PVC |
| Bound | PV is bound to a PVC |
| Released | PVC was deleted but the resource is not yet reclaimed |
| Failed | Automatic reclamation of the volume failed |
| Pending | PV is being provisioned |
Reclaim Policies
| Policy | Description |
|---|---|
| Retain | PV is kept after PVC deletion for manual data recovery (default for manually created PVs) |
| Delete | PV and underlying storage are deleted when PVC is deleted (default for dynamic provisioning) |
| Recycle | Basic scrub (rm -rf /volume/*) before making available again (deprecated) |
How to View PVs
View List
The list shows all PersistentVolumes in the cluster (cluster-scoped, no namespace filter).
How to View PV Details
Review Details
View PV information including:
- Overview: Name, status, storage class, reclaim policy, age
- Capacity: Storage size and volume mode
- Access Modes: How the volume can be accessed
- Volume Source: Backend storage details (NFS path, CSI driver, etc.)
- Claim: Reference to bound PVC (if any)
- Node Affinity: Node constraints for local volumes
- Labels & Annotations: Metadata attached to the PV
- Events: Recent Kubernetes events
How to Create a PV
Write YAML
Enter the PV manifest in YAML format. Key fields:
- spec.capacity.storage - Storage capacity
- spec.accessModes - How the volume can be accessed
- spec.persistentVolumeReclaimPolicy - What happens when released
- spec.storageClassName - Storage class for dynamic binding
- Volume source (e.g., spec.nfs, spec.csi, spec.local)
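As a sketch, a minimal manifest covering these key fields might look like the following (the name, size, class, and NFS details are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv          # placeholder name
spec:
  capacity:
    storage: 10Gi           # spec.capacity.storage
  accessModes:
    - ReadWriteOnce         # spec.accessModes
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual  # spec.storageClassName
  nfs:                      # volume source (one of nfs, csi, local, ...)
    server: nfs.example.com # placeholder server
    path: /exports/data     # placeholder export path
```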
How to Edit a PV
How to Delete a PV
Volume Source Types
PVs support various storage backends:
| Type | Description |
|---|---|
| CSI | Container Storage Interface drivers (modern, recommended) |
| NFS | Network File System shares |
| Local | Local node storage with node affinity |
| HostPath | Directory on the node (testing only) |
| iSCSI | iSCSI block storage |
| FC | Fibre Channel storage |
| RBD | Ceph RADOS Block Device |
| CephFS | Ceph Filesystem |
| GlusterFS | GlusterFS network filesystem |
| AWSElasticBlockStore | AWS EBS volumes |
| GCEPersistentDisk | Google Cloud persistent disks |
| AzureDisk | Azure managed disks |
| AzureFile | Azure File shares |
Example PVs
NFS Volume
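A representative NFS-backed PV might look like this sketch (the server address, export path, and size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany          # NFS supports multi-node read-write
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    server: nfs.example.com  # placeholder NFS server
    path: /exports/shared    # placeholder export path
```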
Local Volume
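A local volume PV requires nodeAffinity so the scheduler knows which node holds the data; a sketch (path and node name are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1        # placeholder path on the node
  nodeAffinity:                  # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1         # placeholder node name
```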
CSI Volume
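A statically provisioned CSI-backed PV might be sketched as follows (the driver name, volume handle, and class are placeholders for whatever your CSI driver uses):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: csi-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: fast-ssd       # placeholder class name
  csi:
    driver: csi.example.com        # placeholder CSI driver name
    volumeHandle: vol-0123456789   # placeholder backend volume ID
    fsType: ext4
```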
Volume Modes
| Mode | Description |
|---|---|
| Filesystem | Volume is mounted as a directory (default) |
| Block | Volume is presented as a raw block device |
Access Modes
| Mode | Abbreviation | Description |
|---|---|---|
| ReadWriteOnce | RWO | Mounted read-write by a single node |
| ReadOnlyMany | ROX | Mounted read-only by multiple nodes |
| ReadWriteMany | RWX | Mounted read-write by multiple nodes |
| ReadWriteOncePod | RWOP | Mounted read-write by a single pod |
Troubleshooting
PV stuck in Released status
- The PVC was deleted but reclaim policy is Retain
- Manually delete the spec.claimRef to make it Available again
- Or delete and recreate the PV after backing up data
- Consider changing reclaim policy for future PVs
PV shows Failed status
- Automatic reclamation failed (usually with Recycle policy)
- Check events for error details
- Manually handle the volume and recreate if needed
- Recycle policy is deprecated; use Delete or Retain instead
PVC cannot bind to PV
- Check access modes match between PVC and PV
- Verify capacity: PV capacity must be >= PVC request
- Ensure storage class matches (or both are empty for no class)
- Check if PV is already bound to another PVC
- For local volumes, verify node affinity allows scheduling
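A PVC binds only when all of the above line up. As a reference point, this sketch of a claim would match a 10Gi ReadWriteOnce PV with storageClassName: manual (all names are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc        # placeholder name
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce        # must be satisfied by the PV's access modes
  storageClassName: manual # must match the PV's storageClassName
  resources:
    requests:
      storage: 10Gi        # must be <= the PV's capacity
```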
Cannot delete PV
- PV may still be bound to a PVC
- Delete the PVC first, then delete the PV
- Check for finalizers blocking deletion
- Verify you have delete permissions
Local PV not accessible
- Verify the path exists on the specified node
- Check node affinity configuration
- Ensure pod is scheduled to the correct node
- Verify directory permissions on the node
NFS PV mount issues
- Verify NFS server is accessible from cluster nodes
- Check NFS export permissions
- Ensure NFS client packages are installed on nodes
- Verify firewall allows NFS traffic
FAQ
What is the difference between PV and PVC?
PV (Persistent Volume) is the actual storage resource provisioned by an admin or dynamically. PVC (Persistent Volume Claim) is a request for storage by a user. PVCs bind to PVs to use the storage.
When should I create PVs manually?
Create PVs manually when:
- Using storage that doesn’t support dynamic provisioning
- Pre-provisioning storage for specific workloads
- Using local storage with node affinity
- Migrating from existing storage systems
Can I resize a PV?
Not through the PV object itself: its capacity cannot be edited directly. If the volume's StorageClass sets allowVolumeExpansion: true and the driver supports expansion, resize by increasing the bound PVC's storage request. Otherwise:
- Back up your data
- Create a new larger PV
- Migrate data to the new volume
- Update workloads to use the new PVC
What happens when I delete a PVC with Retain policy?
The PV enters “Released” state and keeps its data. It cannot be automatically bound to a new PVC. You must manually clear the claimRef or delete/recreate the PV to reuse it.
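As an illustration, the stanza to clear looks like this inside the Released PV (the claim name and namespace are placeholder values); removing the block returns the PV to Available:

```yaml
# Inside the Released PV's spec, claimRef still points at the deleted PVC.
# Editing the PV and removing this block makes the PV Available again.
spec:
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: old-pvc          # placeholder: the deleted claim's name
    namespace: default     # placeholder: the claim's namespace
```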
Can multiple PVCs use the same PV?
No. A PV can only be bound to one PVC at a time. For shared storage, use a volume type that supports ReadWriteMany (like NFS) and create separate PVs or use a CSI driver with volume sharing capabilities.
Why use CSI over in-tree volume plugins?
CSI (Container Storage Interface) is the modern standard:
- Maintained by storage vendors, not Kubernetes core
- Features don’t require Kubernetes upgrades
- Supports snapshots, cloning, and expansion
- In-tree plugins (awsElasticBlockStore, gcePersistentDisk) are deprecated
How do local volumes differ from hostPath?
Local volumes are intended for production use, unlike hostPath, which is meant for testing only. They provide:
- Node affinity (required; the scheduler places pods on the node that holds the data)
- Proper lifecycle management through PVC binding
- Static provisioning via an external local volume provisioner (core Kubernetes does not dynamically provision local volumes)
How do I migrate data between PVs?
- Create a new PV with desired configuration
- Create a new PVC bound to the new PV
- Run a migration pod with both volumes mounted
- Copy data from old to new volume
- Update workloads to use the new PVC
- Delete old PVC/PV after verification
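The migration pod in the steps above can be sketched like this (the image, pod name, and claim names are placeholders; it assumes both PVCs exist and are bound):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pv-migrate           # placeholder name
spec:
  restartPolicy: Never
  containers:
    - name: copy
      image: busybox:1.36    # any image with a POSIX cp works
      command: ["sh", "-c", "cp -a /old/. /new/"]
      volumeMounts:
        - name: old-data
          mountPath: /old
        - name: new-data
          mountPath: /new
  volumes:
    - name: old-data
      persistentVolumeClaim:
        claimName: old-pvc   # placeholder: existing claim
    - name: new-data
      persistentVolumeClaim:
        claimName: new-pvc   # placeholder: new claim
```

Once the pod completes, verify the data on the new volume before deleting the old PVC/PV.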