The CRUSH (Controlled Replication Under Scalable Hashing) Map defines how Ceph distributes data across your cluster. This page provides a visual representation of your cluster’s topology, showing the hierarchy from the root down to individual OSDs.

Key Concepts

CRUSH Map

A hierarchical map that defines the physical topology of your cluster for data placement decisions.

CRUSH Rules

Rules that determine how data is distributed across the topology based on failure domains.

Bucket

A container in the CRUSH hierarchy (root, datacenter, rack, host) that groups lower-level items.

CRUSH Weight

A value representing the relative storage capacity of an OSD, typically measured in TiB.

Required Permissions

| Action | Permission |
|---|---|
| View CRUSH Map | iam:project:infrastructure:ceph:read |

The CRUSH Map page is read-only. Modifying the CRUSH topology requires direct Ceph CLI access or Pool configuration changes.

CRUSH Hierarchy Levels

The CRUSH map organizes storage in a hierarchical tree structure. Each level represents a failure domain:

| Level | Icon | Description |
|---|---|---|
| Root | Network | The top-level container for the entire cluster |
| Datacenter | Database | Physical data center location |
| Room | Box | Server room or zone within a datacenter |
| Rack | Layers | Physical rack containing servers |
| Host | Server | Individual server/node running Ceph |
| OSD | Hard Drive | Object Storage Daemon managing a physical disk |

The hierarchy depth depends on your cluster configuration. Simple clusters may only have root → host → osd levels, while larger deployments include datacenter, room, and rack levels.
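
If you also have CLI access to the cluster, the same hierarchy can be printed with ceph osd tree. The output below is only an illustrative sketch of a minimal root → host → osd layout; columns and values vary by Ceph release and cluster.

```bash
# Print the CRUSH hierarchy as the cluster sees it (illustrative output only).
ceph osd tree

# ID  CLASS  WEIGHT   TYPE NAME          STATUS  REWEIGHT  PRI-AFF
# -1         7.27737  root default
# -3         3.63869      host node-a
#  0    hdd  3.63869          osd.0          up   1.00000  1.00000
# -5         3.63869      host node-b
#  1    hdd  3.63869          osd.1          up   1.00000  1.00000
```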

How to View the CRUSH Map

1. Select Cluster

Choose a Ceph cluster from the cluster dropdown. Only ready (bootstrapped) clusters are available.

2. View Statistics

Review the summary cards showing:
  • Total OSDs: All OSDs in the CRUSH map
  • Up: OSDs with running daemons
  • In: OSDs participating in data placement
  • Hosts: Number of host nodes
  • TiB Total: Combined CRUSH weight (capacity)
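
For a rough CLI cross-check of the first three cards, ceph osd stat (or the OSD line of ceph -s) reports the same counts; the sample output is illustrative and its exact wording differs between Ceph versions.

```bash
# Quick OSD summary: total, up, and in counts (illustrative output).
ceph osd stat
# 12 osds: 12 up (since 3d), 12 in (since 3d); epoch: e2451
```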

3. Explore the Tree

The OSD Tree shows the complete cluster topology:
  • Click on nodes with children to expand/collapse
  • Root and host levels are expanded by default
  • Use Expand All to see the complete tree
  • Use Collapse All to show only the root level

4. Review Node Details

Each node displays:
  • Name: Node identifier
  • Type: Level in the hierarchy (badge)
  • Device Class: For OSDs (HDD, SSD, NVMe)
  • Status: For OSDs (up/down, in/out)
  • CRUSH Weight: Storage capacity in TiB

Understanding the OSD Tree

Node Information

Each node in the tree displays relevant information based on its type.

Bucket Nodes (root, datacenter, room, rack, host):
  • Name and type badge
  • Number of children (when collapsed)
  • Aggregate CRUSH weight
OSD Nodes:
  • OSD identifier (e.g., osd.0)
  • Device class (HDD, SSD, NVMe)
  • Status: up/in, up/out, down/in, down/out
  • Individual CRUSH weight
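
A close CLI counterpart to this view is ceph osd df tree, which shows per-OSD device class, CRUSH weight, reweight factor, utilization, and status, rolled up along the CRUSH tree.

```bash
# Per-OSD class, CRUSH weight, reweight, utilization, and status,
# with bucket rows showing aggregate values for their children.
ceph osd df tree
```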

OSD Status Indicators

| Status | Color | Meaning |
|---|---|---|
| up/in | Green | Healthy, actively storing data |
| up/out | Amber | Running but not receiving new data |
| down/in | Red | Not running, expected to return |
| down/out | Red | Not running, excluded from placement |
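
From the CLI, the same states can be inspected per OSD. A sketch, assuming a reasonably recent Ceph release (state filters on ceph osd tree are not available on very old versions):

```bash
# List only OSDs that are currently down (recent Ceph releases).
ceph osd tree down

# The full up/down and in/out state per OSD is also in the OSD map dump.
ceph osd dump | grep '^osd\.'
# osd.3 down in  weight 1 up_from 812 up_thru 810 ...
```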

CRUSH Weight

CRUSH weight determines how much data an OSD receives relative to others:
  • Measured in TiB (Tebibytes)
  • Higher weight = more data assigned
  • Typically matches the physical disk size
  • Can be adjusted to balance data distribution
The total CRUSH weight shown in the statistics represents the sum of all OSD weights, giving you an overview of total cluster capacity as seen by CRUSH.
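
To reproduce the TiB Total figure from the CLI, one possible approach, assuming jq is installed and that your Ceph version exposes a crush_weight field in the JSON output of ceph osd df, is:

```bash
# Sum the CRUSH weights of all OSDs; this should roughly match the "TiB Total" card.
ceph osd df --format json | jq '[.nodes[].crush_weight] | add'
```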

Failure Domains

CRUSH uses the hierarchy to ensure data safety:
  • Replicated pools place replicas in different failure domains
  • Default rule typically separates replicas by host
  • Datacenter-aware rules separate replicas by datacenter
  • Rack-aware rules separate replicas by rack
A well-designed CRUSH hierarchy ensures that losing a single failure domain (host, rack, or datacenter) doesn’t cause data loss.
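
Failure-domain rules are defined outside this page, for example with the Ceph CLI. A minimal sketch in which the rule name rack-replicated and the pool name mypool are placeholders:

```bash
# Create a replicated rule that spreads replicas across racks under the "default" root.
# Syntax: ceph osd crush rule create-replicated <name> <root> <failure-domain> [<device-class>]
ceph osd crush rule create-replicated rack-replicated default rack

# Inspect the rule and apply it to a pool.
ceph osd crush rule dump rack-replicated
ceph osd pool set mypool crush_rule rack-replicated
```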

Statistics Cards

Total OSDs

The total number of OSD entries in the CRUSH map, regardless of status.

Up

OSDs with running daemon processes. These OSDs are operational and can serve I/O requests.

In

OSDs included in the data placement map. Only “in” OSDs receive data according to CRUSH rules.

Hosts

The number of unique host-level buckets in the CRUSH tree.

TiB Total

The sum of all OSD CRUSH weights, representing the total capacity visible to the CRUSH algorithm.

Troubleshooting

The CRUSH map doesn’t load or appears empty
  • Verify the cluster is bootstrapped and ready
  • Check that OSDs have been added to the cluster
  • Ensure you have read permission for Ceph resources
  • Try refreshing the page

An OSD shows as down
  • Check if the OSD daemon is running on the host
  • Verify network connectivity to the OSD host
  • Check for disk failures or hardware issues
  • Review OSD logs for errors

The hierarchy is missing expected levels
  • The hierarchy depends on your cluster configuration
  • Simple clusters may lack datacenter/rack levels
  • Host labels determine placement in the tree
  • Check bucket definitions in CRUSH rules

CRUSH weights don’t match physical disk sizes
  • Weights are set during OSD creation
  • Can be manually adjusted with ceph osd crush reweight
  • Weights may differ from physical size by design
  • Check for intentional reweighting

An OSD is missing from the tree
  • Newly added OSDs may take time to appear
  • Check if OSDs are properly registered
  • Verify the cluster orchestrator status
  • Refresh the page to get the latest data
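
Where the checklist above mentions daemon status and logs, the exact commands depend on how the cluster was deployed. A sketch for a cephadm-managed cluster, with osd.3 as a placeholder:

```bash
# List OSD daemons and their states as seen by the orchestrator (cephadm).
ceph orch ps --daemon-type osd

# On the host itself, check the service unit and recent logs for one OSD.
systemctl status ceph-osd@3          # package-based (non-containerized) deployments
cephadm logs --name osd.3            # cephadm/containerized deployments
```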

FAQ

What is CRUSH?

CRUSH (Controlled Replication Under Scalable Hashing) is Ceph’s algorithm for determining where data should be stored. It:
  • Distributes data across OSDs without a central lookup table
  • Ensures data redundancy across failure domains
  • Scales to thousands of OSDs without performance degradation
  • Rebalances data automatically, and minimally, when the cluster topology changes

What are failure domains?

Failure domains are groups of components that can fail together:
  • Host: If a server fails, all its OSDs fail
  • Rack: A power or network failure affects all servers in the rack
  • Datacenter: A natural disaster affects the entire location
CRUSH places data replicas in different failure domains to survive such failures.

How many levels should my CRUSH hierarchy have?

The CRUSH hierarchy is customizable:
  • Small clusters often use only root → host → osd
  • Additional levels are added for larger deployments
  • Levels should match your physical infrastructure
  • Unnecessary levels add complexity without benefit

What is the difference between CRUSH weight and reweight?

CRUSH Weight: The base capacity of an OSD, typically matching the disk size. Used by CRUSH for initial data distribution.

Reweight: A multiplier (0.0-1.0) applied on top of the CRUSH weight. Used to temporarily reduce an OSD’s share of data without changing its CRUSH weight.
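
The two values are changed with different CLI commands. A sketch in which the OSD id and the weight values are placeholders:

```bash
# Change the CRUSH weight (capacity, roughly TiB) of osd.2.
ceph osd crush reweight osd.2 3.63869

# Apply a temporary reweight factor between 0.0 and 1.0 to osd.2
# without touching its CRUSH weight.
ceph osd reweight 2 0.85
```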

How does CRUSH weight affect data distribution?

CRUSH distributes data proportionally based on weights:
  • A 4 TiB OSD with weight 4.0 gets twice the data of a 2 TiB OSD with weight 2.0
  • This assumes weights are set correctly
  • Mismatched weights cause uneven utilization

Can I modify the CRUSH map from this page?

No. This page is read-only for viewing the topology. To modify CRUSH:
  • Use CRUSH rules in Pool configuration
  • Add/remove OSDs through the OSDs page
  • Use the Ceph CLI for advanced CRUSH editing
  • Move hosts between buckets via the CLI
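
For the CLI route, two common operations are moving a host into a different bucket and exporting the map for offline editing. Bucket and host names below are placeholders:

```bash
# Create the target rack bucket if it does not exist yet, then move host "node-a" under it.
ceph osd crush add-bucket rack1 rack
ceph osd crush move node-a rack=rack1

# Export, decompile, and (after editing) recompile and re-inject the CRUSH map.
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```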

What do I see when I expand a bucket?

Expanding a bucket reveals its children:
  • Root shows datacenters, rooms, racks, or hosts
  • Host shows all OSDs on that server
  • The hierarchy depth varies by cluster configuration
  • Each level shows the aggregate weight of its children

Why is an OSD up but out?

An OSD can be up/out when:
  • It was manually marked out for maintenance
  • It is being drained before removal
  • Its reweight was set to 0
  • It is being prepared for disk replacement
Data migrates away from out OSDs while they remain running.
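
Marking an OSD out (and back in) is a CLI operation; the OSD id below is a placeholder:

```bash
# Stop assigning new data to osd.5 and let existing data migrate away.
ceph osd out 5

# Return osd.5 to the placement map once maintenance is finished.
ceph osd in 5
```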

What are device classes?

Device classes (HDD, SSD, NVMe) enable class-specific placement:
  • Pools can target specific device classes
  • SSD-only pools serve high-performance workloads
  • HDD pools serve bulk storage
  • Mixed configurations are possible with CRUSH rules
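
Device classes are also managed from the CLI. A minimal sketch in which the rule name ssd-only and the pool name fast-pool are placeholders:

```bash
# List the device classes known to the CRUSH map.
ceph osd crush class ls

# Create a replicated rule restricted to SSD OSDs, spread across hosts,
# and point a pool at it.
ceph osd crush rule create-replicated ssd-only default host ssd
ceph osd pool set fast-pool crush_rule ssd-only
```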