Physical Disks provides a comprehensive view of all storage devices across your Ceph cluster nodes. This page helps you identify available disks for OSD creation, monitor disk usage, and understand why certain disks may not be usable.

Key Concepts

Physical Disk

A storage device (HDD, SSD, or NVMe) attached to a cluster node that can potentially be used as an OSD.

Available

A disk that meets all requirements and can be used to create a new OSD.

In Use

A disk that already has one or more OSDs deployed on it.

Rejected

A disk that cannot be used due to specific reasons (partitions, filesystem, size, etc.).

Required Permissions

Action | Permission
View Physical Disks | iam:project:infrastructure:ceph:read
This page is read-only. To create OSDs from available disks, use the OSDs page.

Disk Status

Status | Description
Available | Disk is clean and can be used for OSD creation
In Use (OSD) | Disk has one or more OSDs deployed (shows OSD IDs)
Rejected | Disk cannot be used; hover to see rejection reasons
Unused | Disk is not available but has no OSDs or rejection reasons

Disk Types

Type | Icon | Description
HDD | Hard Drive | Traditional rotational hard disk drive
SSD | Disc | Solid-state drive
NVMe | Disc | High-performance NVMe solid-state drive

How to View Physical Disks

1. Select Cluster

Choose a Ceph cluster from the cluster dropdown. Only ready (bootstrapped) clusters show disk inventory.
2. View Statistics

Review the summary cards showing:
  • Total Disks: All physical disks across cluster nodes
  • Available: Disks ready for OSD creation
  • In Use (OSD): Disks with active OSDs
  • Rejected: Disks that cannot be used
  • HDD: Count of rotational drives
  • SSD/NVMe: Count of solid-state drives
3. Browse Disk List

The table shows all disks with device path, host, type, size, model, and status.
4. Filter and Search

Use filters to focus on specific disk states:
  • Available: Show only disks ready for use
  • In Use (OSD): Show only disks with OSDs
  • Rejected: Show only unusable disks
Search by device path, hostname, device ID, or model name.
5. View Rejection Reasons

For rejected disks, hover over the status badge to see why the disk cannot be used.

Disk Table Fields

Field | Description
Device | Device path (e.g., /dev/sda) and device ID
Host | Node where the disk is located
Type | Disk type: HDD, SSD, or NVMe
Size | Total disk capacity
Model | Disk model name and vendor
Status | Current status with details

Understanding Disk Status

Available Disks

Available disks meet all requirements for OSD creation:
  • No existing partitions or filesystems
  • Sufficient size (typically > 5GB)
  • Not mounted or in use
  • Properly detected by the system
Use the OSDs page to create new OSDs from available disks. Available disks appear in the Add OSD drawer.
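
The same check can be made from the command line. A minimal sketch, assuming a cephadm-managed cluster and /dev/sdb as the candidate disk (a placeholder):

  # Device inventory as the orchestrator sees it; AVAILABLE should read Yes
  ceph orch device ls --wide

  # On the node: a clean disk shows no partitions, FSTYPE, or MOUNTPOINT
  lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT /dev/sdb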

In Use (OSD) Disks

Disks with OSD status show which OSD IDs are using them:
  • Single OSD per disk is typical
  • Multiple OSDs may exist on large disks (with partitioning)
  • The OSD ID links the disk to the OSD configuration
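
To confirm which physical device backs a given OSD, the Ceph CLI can be queried directly. A brief sketch, using OSD ID 3 as a placeholder:

  # Devices associated with a running OSD daemon
  ceph device ls-by-daemon osd.3

  # The OSD's metadata also lists its backing devices and host
  ceph osd metadata 3 | grep -E '"devices"|"hostname"'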

Rejected Disks

Rejected disks have one or more reasons preventing OSD creation. Common rejection reasons include:
Reason | Description
Has partitions | Disk has an existing partition table
Has filesystem | Disk has a filesystem (ext4, xfs, etc.)
Too small | Disk is below the minimum size requirement
Locked | Disk is locked by another process
Has holders | Other devices depend on this disk (dm, raid)
Mounted | Disk or partition is currently mounted
To make a rejected disk available, you must resolve the rejection reasons. This typically involves removing partitions, filesystems, or unmounting the disk.
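
The same information can be gathered on the node itself without modifying the disk; /dev/sdc below is a placeholder:

  # Show partitions, filesystem signatures, and mount points
  lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sdc

  # With no options, wipefs only lists signatures; it does not erase anything
  wipefs /dev/sdc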

Statistics Cards

Total Disks

The total count of all physical disks detected across all cluster nodes.

Available

Disks that are clean and ready for OSD creation. These appear in the Add OSD list on the OSDs page.

In Use (OSD)

Disks that already have Ceph OSDs deployed. Each disk shows its associated OSD ID(s).

Rejected

Disks that cannot be used for OSD creation due to existing data, partitions, or other restrictions.

HDD

Count of rotational (spinning) hard disk drives. HDDs are identified by the rotational property.

SSD/NVMe

Count of solid-state drives including both SATA SSDs and NVMe drives. These offer faster I/O than HDDs.

Troubleshooting

No disks appear in the inventory

  • Verify the cluster is bootstrapped and ready
  • Check that nodes have the OSD role assigned
  • Ensure disks are properly connected and detected by the OS
  • Try refreshing the inventory
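
These checks can also be run from the admin node. A minimal sketch, assuming a cephadm-managed cluster:

  ceph status                      # cluster must be bootstrapped and responsive
  ceph orch host ls                # the node should be listed as a cluster host
  ceph orch device ls --refresh    # force the orchestrator to rescan its device inventory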

A disk shows as Rejected

  • Hover over the Rejected badge to see specific reasons
  • Common causes: existing partitions, filesystems, or mounts
  • Use lsblk on the node to inspect the disk
  • Clean the disk with wipefs or dd if appropriate
  • Refresh both the Physical Disks and OSDs pages

A disk does not appear in the Add OSD drawer

  • Verify the disk shows as Available (not just Unused)
  • Check that the host has the OSD role in the cluster
  • Ensure no other process is using the disk

Disk type is reported incorrectly

  • Disk type is detected from system properties
  • Virtual disks may not report type correctly
  • Check /sys/block/<device>/queue/rotational on the node
  • Type detection depends on disk firmware reporting

Disk details or size look wrong

  • Disk may not be properly initialized
  • Check disk health with smartctl
  • Verify disk is detected in lsblk output
  • Some virtual disks report size differently
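
A quick way to run these checks on the node, assuming smartmontools is installed and /dev/sdb is the disk in question (a placeholder):

  lsblk -d -o NAME,SIZE,TYPE,MODEL /dev/sdb   # confirm the kernel sees the disk and its size
  smartctl -H /dev/sdb                        # overall SMART health assessment
  smartctl -i /dev/sdb                        # identity info: model, capacity, rotation rate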

Disk status after creating an OSD

  • After OSD creation, the disk status changes to “In Use”
  • Refresh the page to see updated status
  • The disk will show the OSD ID in its status

FAQ

What makes a disk available?

A disk is available when it:
  • Has no partition table
  • Has no filesystem
  • Is not mounted
  • Meets minimum size requirements
  • Is not used by LVM, RAID, or other device mappers
  • Is not locked by another process

How do I make a rejected disk available?

To prepare a rejected disk:
  1. Identify the rejection reason by hovering over the status
  2. Unmount any mounted partitions
  3. Remove partitions with fdisk or parted
  4. Wipe filesystem signatures with wipefs -a /dev/sdX
  5. Refresh the inventory to see updated status
Wiping a disk destroys all data. Ensure you have backups before proceeding.
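
A minimal sketch of that cleanup, assuming a cephadm-managed cluster with the target disk at /dev/sdc on host node01 (both placeholders):

  # Unmount anything still mounted from the disk
  umount /dev/sdc1

  # Remove all filesystem, RAID, and partition-table signatures
  wipefs -a /dev/sdc

  # Alternatively, have the orchestrator zap the device end to end
  ceph orch device zap node01 /dev/sdc --force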

Can I use a disk that already has data on it?

No. Ceph requires clean disks for OSD creation. You must remove all data, partitions, and filesystems before using a disk. This ensures data integrity and allows Ceph to manage the entire disk.

What does the Unused status mean?

Unused status means the disk:
  • Is not available for OSD creation
  • Does not have an OSD deployed
  • Has no specific rejection reasons
This can happen with system disks, boot devices, or disks reserved for other purposes.

How is the disk type detected?

Disk type is detected from:
  • The rotational flag in /sys/block/<device>/queue/rotational
  • Device naming patterns (nvme* for NVMe)
  • SMART data and device identification
HDD: rotational = 1, SSD: rotational = 0, NVMe: device path starts with nvme
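
The detection can be reproduced on a node; sda below is a placeholder device name:

  cat /sys/block/sda/queue/rotational   # 1 = HDD, 0 = SSD/NVMe
  lsblk -d -o NAME,ROTA,TRAN,TYPE       # ROTA flag plus transport (sata, nvme, ...)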

Can I create multiple OSDs on a single disk?

While technically possible through partitioning, it’s not recommended:
  • Modern best practice is one OSD per physical disk
  • Multiple OSDs per disk was common with older small disks
  • Single OSD per disk provides better failure isolation
  • Ceph orchestrator typically creates one OSD per disk
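
For reference, the cephadm orchestrator's default OSD specs follow the one-OSD-per-disk model. Assuming a cephadm-managed cluster, the following creates a single OSD on every available disk:

  # Preview what would be created without applying anything
  ceph orch apply osd --all-available-devices --dry-run

  # Create one OSD per available (clean) device across the cluster
  ceph orch apply osd --all-available-devices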

What is the minimum disk size for an OSD?

The minimum depends on your Ceph version and configuration:
  • Default minimum is typically 5GB
  • Production systems should use much larger disks
  • Very small disks are often rejected automatically
  • Check your Ceph configuration for specific limits

Why don't I see all of a node's disks in the inventory?

Some disks may be excluded:
  • System/boot disks are typically filtered out
  • Disks without proper block device entries
  • Removable media may be excluded
  • Disks in certain device mapper configurations
Check lsblk on the node to see all block devices.
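
For example, to list one row per physical disk with loop devices excluded:

  lsblk -d -e7 -o NAME,SIZE,TYPE,MODEL,SERIAL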

When is the disk inventory refreshed?

The inventory is fetched from the Ceph Dashboard API when:
  • You load or refresh the Physical Disks page
  • You add or remove OSDs
  • The underlying Ceph inventory updates periodically
Use the refresh button to get the latest inventory data.
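
On cephadm-managed clusters, an equivalent refresh can be triggered from the CLI:

  # Rescan devices on all hosts and print the updated inventory
  ceph orch device ls --refresh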