Ceph clusters provide distributed storage with high availability and scalability. This page allows you to create, configure, bootstrap, and manage your Ceph storage clusters.

Key Concepts

Cluster

A Ceph cluster is a collection of nodes running Ceph daemons that work together to provide storage services.

Bootstrap

The process of initializing a cluster by deploying Ceph components to nodes and creating the initial configuration.

Public Network

The network used for client-to-cluster communication and inter-daemon traffic.

Cluster Network

Optional dedicated network for OSD replication traffic to improve performance.

Required Permissions

Action              Permission
------------------  ---------------------------------------
View Clusters       iam:project:infrastructure:ceph:read
Create Cluster      iam:project:infrastructure:ceph:write
Edit Cluster        iam:project:infrastructure:ceph:write
Bootstrap Cluster   iam:project:infrastructure:ceph:write
Reset Cluster       iam:project:infrastructure:ceph:write
Delete Cluster      iam:project:infrastructure:ceph:delete

Cluster Status

Status      Description
----------  -----------------------------------------------
Ready       Cluster is bootstrapped and operational
Not Ready   Cluster is registered but not yet bootstrapped
Active      Cluster is enabled for operations
Disabled    Cluster is disabled (no operations allowed)

How to View Clusters

1. Navigate to Clusters

Go to Ceph Storage > System > Clusters in the navigation menu.

2. View Cluster List

The list shows all registered Ceph clusters with their status, version, and network configuration.

3. Filter and Search

Use the search box to find clusters by name, version, or network. Filter by status (Active/Disabled).

4. Review Statistics

Check the summary cards for:
  • Total Clusters: All registered clusters
  • Ready: Bootstrapped and operational clusters
  • Not Ready: Clusters awaiting bootstrap
  • Active/Inactive: Enabled vs disabled clusters

How to Create a Cluster

1. Click Create Cluster

Click the Create Cluster button in the page header.

2. Configure Basic Settings

Enter the cluster configuration:
  • Cluster Name: Unique identifier for the cluster
  • Ceph Version: Select the Ceph release version
  • Status: Enable or disable the cluster

3. Configure Networks

Set up the network configuration (see the validation sketch below):
  • Public Network: CIDR for client and monitor communication (required)
  • Cluster Network: CIDR for OSD replication (optional, improves performance)
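
Both network fields expect CIDR notation. As a rough illustration of the validation involved, here is a minimal sketch using Python's standard ipaddress module (the helper name and example addresses are hypothetical, not the product's actual code):

```python
import ipaddress

def validate_network(cidr: str, label: str):
    """Parse a CIDR string such as 10.0.0.0/24, rejecting host addresses."""
    try:
        # strict=True rejects values like 10.0.0.5/24 whose host bits are set
        return ipaddress.ip_network(cidr, strict=True)
    except ValueError as exc:
        raise ValueError(f"{label} must be a valid CIDR block: {exc}") from exc

public = validate_network("10.0.0.0/24", "Public Network")
cluster = validate_network("10.0.1.0/24", "Cluster Network")

# Keeping the two networks disjoint is what actually isolates OSD
# replication traffic from client traffic.
assert not public.overlaps(cluster), "public and cluster networks overlap"
```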

4. Configure Dashboard (Optional)

Enable and configure the Ceph Dashboard:
  • Enable Dashboard: Toggle to enable the web UI
  • Dashboard User: Initial admin username
  • Dashboard Password: Initial admin password
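
The interface applies these credentials for you during bootstrap. For reference, enabling the dashboard manually on a running cluster uses standard Ceph commands that look roughly like this (a hedged sketch; "admin" and "changeme" are placeholder values, and recent Ceph releases require the password to be supplied via a file):

```python
import subprocess
import tempfile

# Enable the dashboard mgr module (standard Ceph command).
subprocess.run(["ceph", "mgr", "module", "enable", "dashboard"], check=True)

# Create the initial admin user; the password is passed via a file (-i).
with tempfile.NamedTemporaryFile("w", suffix=".txt") as pw:
    pw.write("changeme")  # placeholder password
    pw.flush()
    subprocess.run(
        ["ceph", "dashboard", "ac-user-create", "admin", "administrator",
         "-i", pw.name],
        check=True,
    )
```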

5. Create

Click Create to register the cluster. The cluster will be in “Not Ready” state until bootstrapped.

Creating a cluster only registers it in the system. You must bootstrap the cluster to deploy Ceph components and make it operational.

How to View Cluster Details

1. Click Cluster Name

Click on a cluster name in the list to open the detail dashboard.

2. Review Dashboard

The detail page shows a comprehensive dashboard with:
  • Cluster Status: Overall health and status
  • Health Alerts: Active warnings and errors
  • Operations: Bootstrap and reset actions
  • Upgrade Status: Version and upgrade information
  • Storage Capacity: Used vs available storage
  • Cluster Config: Network and version settings
  • OSD Status: Object Storage Daemon health
  • Monitor Status: Monitor daemon quorum
  • PG Status: Placement Group health
  • Node Overview: All nodes in the cluster

How to Bootstrap a Cluster

Bootstrapping initializes a new cluster by deploying Ceph components to the configured nodes.

1. Open Cluster Details

Navigate to the cluster detail page.

2. Click Bootstrap

In the Operations card, click Bootstrap Cluster.

3. Configure Bootstrap Options

Review and configure bootstrap settings in the wizard:
  • Select nodes to include
  • Configure initial OSD deployment
  • Set replication settings

4. Start Bootstrap

Click Start Bootstrap to begin the process. Progress is shown in the operation logs panel.

5. Monitor Progress

The operation banner shows progress. Click View Logs to see detailed output.

Bootstrap is a long-running operation that deploys Ceph to all nodes. Do not interrupt the process. The cluster will be unavailable during bootstrap.
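
For context, under a cephadm-based deployment (an assumption; the underlying tooling is not documented here), the first-node bootstrap the wizard performs is roughly equivalent to the following, with placeholder addresses and credentials:

```python
import subprocess

# Roughly what a cephadm-based bootstrap runs on the first node.
# All values below are placeholders; the wizard supplies the real ones.
subprocess.run(
    [
        "cephadm", "bootstrap",
        "--mon-ip", "10.0.0.10",             # first monitor, on the public network
        "--cluster-network", "10.0.1.0/24",  # optional OSD replication network
        "--initial-dashboard-user", "admin",
        "--initial-dashboard-password", "changeme",
    ],
    check=True,
)
```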

How to Reset a Cluster

Resetting removes all Ceph data and configuration from the cluster nodes, returning them to a clean state.

1. Open Cluster Details

Navigate to the cluster detail page.

2. Click Reset

In the Operations card, click Reset Cluster.

3. Confirm Reset

Review the warning and confirm you want to destroy all data on the cluster.

4. Monitor Progress

The reset process will remove all Ceph components and data from the nodes.

DESTRUCTIVE OPERATION: Resetting a cluster permanently destroys ALL data stored in the cluster. This action cannot be undone. Ensure you have backups before proceeding.
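
If you need to confirm that a node really is clean after a reset, one hedged approach (assuming cephadm-managed nodes) is to list any remaining Ceph daemons on each node:

```python
import json
import subprocess

# `cephadm ls` reports the Ceph daemons deployed on this host as JSON.
# After a successful reset it should return an empty list.
out = subprocess.run(["cephadm", "ls"], check=True,
                     capture_output=True, text=True).stdout
leftovers = [d["name"] for d in json.loads(out)]
print("Ceph daemons still present:", leftovers or "none")
```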

How to Edit a Cluster

1. Find the Cluster

Locate the cluster in the list.

2. Click Edit

Click the edit (pencil) icon in the actions column.

3. Modify Settings

Update the cluster configuration as needed:
  • Cluster name
  • Ceph version
  • Network settings
  • Dashboard configuration
  • Active/disabled status

4. Save

Click Save to apply changes.

Some settings, such as network changes, may require a cluster reset and re-bootstrap to take effect.

How to Delete a Cluster

1. Find the Cluster

Locate the cluster in the list.

2. Click Delete

Click the delete (trash) icon in the actions column.

3. Confirm Deletion

Confirm you want to delete the cluster registration.

Deleting a cluster removes it from management. If the cluster is bootstrapped, you should reset it first to clean up the nodes. Otherwise, Ceph components will remain on the nodes.

Cluster Configuration Fields

Field                Description
-------------------  ---------------------------------------------------
Cluster Name         Unique identifier for the cluster
Ceph Version         Release version (e.g., Reef, Quincy)
Public Network       CIDR for client/monitor traffic (e.g., 10.0.0.0/24)
Cluster Network      Optional CIDR for OSD replication
Dashboard            Enable Ceph Dashboard web UI
Dashboard User       Initial admin username
Dashboard Password   Initial admin password
Cluster FSID         Unique cluster identifier (auto-generated)
Dashboard URL        URL to access the Ceph Dashboard

Dashboard Cards

Cluster Status Card

Shows the overall cluster health with:
  • Health status (HEALTH_OK, HEALTH_WARN, HEALTH_ERR)
  • OSD summary (up/in counts)
  • Version information
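
The data behind this card corresponds to what the standard `ceph status -f json` command reports. A sketch of reading it directly (the exact key layout varies slightly between Ceph releases):

```python
import json
import subprocess

raw = subprocess.run(["ceph", "status", "-f", "json"],
                     check=True, capture_output=True, text=True).stdout
status = json.loads(raw)

print(status["health"]["status"])  # HEALTH_OK / HEALTH_WARN / HEALTH_ERR
osdmap = status["osdmap"]          # flattened at the top level in recent releases
print(f"{osdmap['num_up_osds']}/{osdmap['num_osds']} OSDs up, "
      f"{osdmap['num_in_osds']} in")
```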

Health Alerts Card

Displays active health checks and warnings from the cluster, such as:
  • Degraded PGs
  • OSD nearfull warnings
  • Clock skew
  • Slow OSD operations
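
These alerts map to Ceph's built-in health checks, which the CLI exposes via `ceph health detail -f json`. A minimal sketch of reading them directly:

```python
import json
import subprocess

raw = subprocess.run(["ceph", "health", "detail", "-f", "json"],
                     check=True, capture_output=True, text=True).stdout
health = json.loads(raw)

# Each entry is a named health check, e.g. OSD_NEARFULL or MON_CLOCK_SKEW.
for name, check in health.get("checks", {}).items():
    print(name, check["severity"], check["summary"]["message"])
```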

Storage Capacity Card

Shows storage utilization:
  • Total capacity
  • Used storage
  • Available storage
  • Usage percentage with visual indicator
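
The totals shown here correspond to `ceph df`. A sketch of computing the usage percentage from its JSON output:

```python
import json
import subprocess

raw = subprocess.run(["ceph", "df", "-f", "json"],
                     check=True, capture_output=True, text=True).stdout
stats = json.loads(raw)["stats"]  # cluster-wide totals

used_pct = 100 * stats["total_used_bytes"] / stats["total_bytes"]
print(f"{used_pct:.1f}% used, {stats['total_avail_bytes']:,} bytes available")
```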

OSD Status Card

Displays OSD (Object Storage Daemon) health:
  • Total OSD count
  • Up/Down status
  • In/Out status

Monitor Status Card

Shows monitor quorum status:
  • Total monitors
  • Quorum members
  • Leader information
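
Quorum membership and the current leader come from the standard `ceph quorum_status` command, for example:

```python
import json
import subprocess

raw = subprocess.run(["ceph", "quorum_status", "-f", "json"],
                     check=True, capture_output=True, text=True).stdout
quorum = json.loads(raw)

print("leader:", quorum["quorum_leader_name"])
print("quorum:", ", ".join(quorum["quorum_names"]))
```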

PG Status Card

Displays Placement Group health:
  • Total PGs
  • Active+clean count
  • Degraded/misplaced counts
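
PG state counts are part of the pgmap section of `ceph status -f json`. A sketch of tallying them:

```python
import json
import subprocess

raw = subprocess.run(["ceph", "status", "-f", "json"],
                     check=True, capture_output=True, text=True).stdout
pgmap = json.loads(raw)["pgmap"]

print("total PGs:", pgmap["num_pgs"])
for entry in pgmap["pgs_by_state"]:
    print(entry["state_name"], entry["count"])  # e.g. active+clean 128
```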

Node Overview Card

Shows all nodes in the cluster with:
  • Hostname
  • Role
  • Status
  • Quick actions

Troubleshooting

Cluster shows “Not Ready”

  • This is expected. New clusters must be bootstrapped to become ready
  • Add nodes to the cluster first
  • Then bootstrap the cluster to deploy Ceph components

Bootstrap fails

  • Check that all nodes are reachable via SSH
  • Verify network configuration matches node interfaces
  • Ensure sufficient disk space on nodes
  • Check operation logs for specific errors
  • Verify Ceph version compatibility with nodes

Cluster reports health warnings

  • Check the Health Alerts card for specific issues
  • Common warnings: nearfull OSDs, slow requests, clock skew
  • Navigate to specific component pages (OSDs, Monitors) for details
  • Some warnings resolve automatically after rebalancing

Cannot access the Ceph Dashboard

  • Verify Dashboard is enabled in cluster settings
  • Check Dashboard URL is accessible from your network
  • Verify credentials are correct
  • Check that the mgr daemon is running

Reset is slow or appears stuck

  • Some cleanup operations may take time
  • Check operation logs for progress
  • If stuck, nodes may need manual cleanup
  • SSH to nodes and verify Ceph processes are stopped

Network configuration is rejected or not working

  • Ensure CIDR format is correct (e.g., 10.0.0.0/24)
  • Verify the network is routable between all nodes
  • Public network must be accessible by clients
  • Cluster network (if used) must be accessible by all OSDs

FAQ

What is the difference between the Public Network and the Cluster Network?

The Public Network carries all client-to-cluster traffic and inter-daemon communication; it’s required for cluster operation. The Cluster Network is an optional dedicated network for OSD-to-OSD replication traffic. Using a separate cluster network improves performance by isolating replication traffic from client traffic.

When should I use a separate cluster network?

Use a separate cluster network when:
  • You have high I/O workloads
  • Client traffic is competing with replication
  • You want to isolate replication for security
  • You have 10GbE+ network infrastructure
For smaller deployments, the public network alone is sufficient.

Can I change the Ceph version of an existing cluster?

Yes, but this requires an upgrade process. Navigate to the cluster detail page and use the Upgrade feature to change versions. Upgrades must follow Ceph’s supported upgrade path.

What happens when I disable a cluster?

Disabling a cluster (setting isActive to false) prevents new operations from being performed through this interface. The actual Ceph cluster continues running, but management operations are blocked.

How long does bootstrap take?

Bootstrap duration depends on:
  • Number of nodes
  • Network speed between nodes
  • Number of OSDs to deploy
  • Ceph version being installed
Typical bootstraps take 10-30 minutes for small clusters.

What is the Cluster FSID?

The FSID (Filesystem ID) is a unique identifier for your Ceph cluster, generated during bootstrap. It’s used internally by Ceph to identify the cluster and should not be changed.

Do I need the Ceph Dashboard?

The Ceph Dashboard is optional but recommended. It provides:
  • Web-based cluster monitoring
  • Visual storage management
  • Performance graphs
  • Native Ceph administration
You can manage clusters through this interface without the Dashboard.

Can I manage more than one cluster?

Yes. You can register and manage multiple Ceph clusters. Each cluster operates independently with its own nodes, storage, and configuration.