Key Concepts
CRUSH Map
A hierarchical map that defines the physical topology of your cluster for data placement decisions.
CRUSH Rules
Rules that determine how data is distributed across the topology based on failure domains.
Bucket
A container in the CRUSH hierarchy (root, datacenter, rack, host) that groups lower-level items.
CRUSH Weight
A value representing the relative storage capacity of an OSD, typically measured in TiB.
Required Permissions
| Action | Permission |
|---|---|
| View CRUSH Map | iam:project:infrastructure:ceph:read |
The CRUSH Map page is read-only. Modifying the CRUSH topology requires direct Ceph CLI access or Pool configuration changes.
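For reference, the same topology can be inspected from the Ceph CLI if you have admin access to the cluster. A minimal, read-only sketch (output formats vary by Ceph release):

```bash
# Print the CRUSH hierarchy: buckets, hosts, OSDs, weights, and status
ceph osd tree

# Print the hierarchy as CRUSH stores it, including device-class shadow trees
ceph osd crush tree --show-shadow

# List the CRUSH rules currently defined in the cluster
ceph osd crush rule ls
```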
CRUSH Hierarchy Levels
The CRUSH map organizes storage in a hierarchical tree structure. Each level represents a failure domain:
| Level | Icon | Description |
|---|---|---|
| Root | Network | The top-level container for the entire cluster |
| Datacenter | Database | Physical data center location |
| Room | Box | Server room or zone within a datacenter |
| Rack | Layers | Physical rack containing servers |
| Host | Server | Individual server/node running Ceph |
| OSD | Hard Drive | Object Storage Daemon managing a physical disk |
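When the topology is managed from the CLI, buckets at these levels are created and arranged with standard Ceph commands. A sketch using hypothetical bucket names (rack1, host1):

```bash
# Create a rack-level bucket (the name rack1 is hypothetical)
ceph osd crush add-bucket rack1 rack

# Attach the rack under the default root
ceph osd crush move rack1 root=default

# Move an existing host bucket into the rack
ceph osd crush move host1 rack=rack1
```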
How to View the CRUSH Map
Select Cluster
Choose a Ceph cluster from the cluster dropdown. Only ready (bootstrapped) clusters are available.
View Statistics
Review the summary cards showing:
- Total OSDs: All OSDs in the CRUSH map
- Up: OSDs with running daemons
- In: OSDs participating in data placement
- Hosts: Number of host nodes
- TiB Total: Combined CRUSH weight (capacity)
Explore the Tree
The OSD Tree shows the complete cluster topology:
- Click on nodes with children to expand/collapse
- Root and host levels are expanded by default
- Use Expand All to see the complete tree
- Use Collapse All to show only the root level
Understanding the OSD Tree
Node Information
Each node in the tree displays information relevant to its type.
Bucket Nodes (root, datacenter, room, rack, host):
- Name and type badge
- Number of children (when collapsed)
- Aggregate CRUSH weight
OSD Nodes:
- OSD identifier (e.g., osd.0)
- Device class (HDD, SSD, NVMe)
- Status: up/in, up/out, down/in, down/out
- Individual CRUSH weight
OSD Status Indicators
| Status | Color | Meaning |
|---|---|---|
| up/in | Green | Healthy, actively storing data |
| up/out | Amber | Running but not receiving new data |
| down/in | Red | Not running, expected to return |
| down/out | Red | Not running, excluded from placement |
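Ceph normally manages these transitions itself, but an OSD can also be marked out or in from the CLI, for example around maintenance. A sketch using a hypothetical OSD id of 7:

```bash
# Mark osd.7 out: the daemon keeps running but stops receiving new data (up/out)
ceph osd out 7

# Mark osd.7 back in so it participates in data placement again
ceph osd in 7
```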
CRUSH Weight
CRUSH weight determines how much data an OSD receives relative to others:
- Measured in TiB (tebibytes)
- Higher weight = more data assigned
- Typically matches the physical disk size
- Can be adjusted to balance data distribution
The total CRUSH weight shown in the statistics represents the sum of all OSD weights, giving you an overview of total cluster capacity as seen by CRUSH.
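From the CLI, weights and actual utilization can be compared, and a CRUSH weight can be adjusted if needed. A sketch with hypothetical values (exact column names vary by release):

```bash
# Show CRUSH weight, reweight, size, and utilization per OSD along the hierarchy
ceph osd df tree

# Set the CRUSH weight of osd.7 (value in TiB; osd.7 and 3.64 are hypothetical)
ceph osd crush reweight osd.7 3.64
```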
Failure Domains
CRUSH uses the hierarchy to ensure data safety (see the example rule after this list):
- Replicated pools place replicas in different failure domains
- Default rule typically separates replicas by host
- Datacenter-aware rules separate replicas by datacenter
- Rack-aware rules separate replicas by rack
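As an illustration of the rack-aware case, a replicated rule that separates replicas by rack can be created from the CLI and assigned to a pool. The rule and pool names below are hypothetical:

```bash
# Replicated rule rooted at "default" that places each replica in a different rack
ceph osd crush rule create-replicated replicated-by-rack default rack

# Point an existing pool at the new rule
ceph osd pool set mypool crush_rule replicated-by-rack
```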
Statistics Cards
Total OSDs
The total number of OSD entries in the CRUSH map, regardless of status.
Up
OSDs with running daemon processes. These OSDs are operational and can serve I/O requests.
In
OSDs included in the data placement map. Only “in” OSDs receive data according to CRUSH rules.
Hosts
The number of unique host-level buckets in the CRUSH tree.
TiB Total
The sum of all OSD CRUSH weights, representing the total capacity visible to the CRUSH algorithm.
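To cross-check these cards against the cluster itself, the rough CLI equivalents are:

```bash
# Total OSD count, plus how many are up and in
ceph osd stat

# Per-OSD weights and utilization behind the TiB Total figure
ceph osd df

# Host buckets can be counted from the tree output
ceph osd tree
```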
Troubleshooting
CRUSH map shows no data
- Verify the cluster is bootstrapped and ready
- Check that OSDs have been added to the cluster
- Ensure you have read permission for Ceph resources
- Try refreshing the page
OSD shows down/out status
- Check if the OSD daemon is running on the host
- Verify network connectivity to the OSD host
- Check for disk failures or hardware issues
- Review OSD logs for errors (see the sketch after this list)
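A minimal sketch of those checks, assuming a systemd-managed (non-containerized) OSD with a hypothetical id of 7; cephadm or containerized deployments expose the same information through the orchestrator and container logs:

```bash
# Ask the cluster which OSDs it currently considers down
ceph health detail
ceph osd tree

# On the OSD's host, check the daemon and its recent logs
systemctl status ceph-osd@7
journalctl -u ceph-osd@7 --since "1 hour ago"
```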
Unexpected hierarchy structure
- The hierarchy depends on your cluster configuration
- Simple clusters may lack datacenter/rack levels
- Host labels determine placement in the tree
- Check bucket definitions in CRUSH rules
CRUSH weights don't match disk sizes
- Weights are set during OSD creation
- Can be manually adjusted with ceph osd crush reweight
- Weights may differ from physical size by design
- Check for intentional reweighting
Missing hosts or OSDs in the tree
- Newly added OSDs may take time to appear
- Check if OSDs are properly registered
- Verify the cluster orchestrator status
- Refresh the page to get the latest data (see the sketch after this list)
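Assuming a cephadm-managed cluster, the registration and orchestrator checks look roughly like this:

```bash
# Orchestrator health and the daemons it manages
ceph orch status
ceph orch ps

# Devices the orchestrator can see on each host
ceph orch device ls

# Confirm whether the OSDs appear in the CRUSH map at all
ceph osd tree
```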
FAQ
What is CRUSH and why is it important?
CRUSH (Controlled Replication Under Scalable Hashing) is Ceph’s algorithm for determining where data should be stored. It:
- Distributes data across OSDs without a central lookup table
- Ensures data redundancy across failure domains
- Scales to thousands of OSDs without performance degradation
- Enables data movement when the cluster topology changes
What are failure domains?
Failure domains are groups of components that can fail together:
- Host: If a server fails, all its OSDs fail
- Rack: A power or network failure affects all servers in the rack
- Datacenter: A natural disaster affects the entire location
Why do some clusters lack datacenter or rack levels?
The CRUSH hierarchy is customizable:
- Small clusters often use only root → host → osd
- Additional levels are added for larger deployments
- Levels should match your physical infrastructure
- Unnecessary levels add complexity without benefit
What is the difference between weight and reweight?
CRUSH Weight: The base capacity of an OSD, typically matching disk size. Used by CRUSH for initial data distribution.
Reweight: A multiplier (0.0-1.0) applied on top of CRUSH weight. Used to temporarily reduce an OSD’s share of data without changing its CRUSH weight.
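The two values are changed with different commands. A sketch with hypothetical ids and values:

```bash
# Change the CRUSH weight (capacity in TiB) of osd.3
ceph osd crush reweight osd.3 3.64

# Apply a temporary 0.0-1.0 override to osd.3 without touching its CRUSH weight
ceph osd reweight 3 0.8
```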
How does CRUSH handle uneven OSD sizes?
CRUSH distributes data proportionally based on weights:
- A 4 TiB OSD with weight 4.0 gets twice the data of a 2 TiB OSD with weight 2.0
- This assumes weights are set correctly
- Mismatched weights cause uneven utilization
Can I modify the CRUSH map from this page?
This page is read-only for viewing the topology. To modify CRUSH (a CLI sketch follows this list):
- Use CRUSH rules in Pool configuration
- Add/remove OSDs through the OSDs page
- Use Ceph CLI for advanced CRUSH editing
- Move hosts between buckets via CLI
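For the advanced CLI editing mentioned above, the usual options are targeted commands or a full export/edit/recompile cycle. A sketch with hypothetical names; review changes carefully before injecting a new map:

```bash
# Targeted change: move a host bucket under a different rack
ceph osd crush move host1 rack=rack2

# Full cycle: export the binary map, decompile, edit, recompile, inject
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# ... edit crushmap.txt ...
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin
```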
What does the tree show when expanding a bucket?
Expanding a bucket reveals its children:
- Root shows datacenters, rooms, racks, or hosts
- Host shows all OSDs on that server
- The hierarchy depth varies by cluster configuration
- Each level shows aggregate weight of children
Why would an OSD be 'up' but 'out'?
An OSD can be up/out when:
- Manually marked out for maintenance
- Being drained before removal
- Reweight set to 0
- Preparing for disk replacement
How do device classes affect data placement?
Device classes (HDD, SSD, NVMe) enable class-specific placement (see the example rule after this list):
- Pools can target specific device classes
- Enables SSD-only pools for high-performance workloads
- HDD pools for bulk storage
- Mixed configurations are possible with CRUSH rules
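As an example of class-specific placement, a rule restricted to the ssd device class can be created and assigned to a pool. The rule and pool names below are hypothetical:

```bash
# List the device classes present in the cluster
ceph osd crush class ls

# Replicated rule that only selects ssd OSDs, separating replicas by host
ceph osd crush rule create-replicated ssd-only default host ssd

# Point a pool at the new rule
ceph osd pool set fast-pool crush_rule ssd-only
```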