
Ceph: CRUSH map for rack awareness

Here I will discuss the default Ceph CRUSH map: how to get it, decompile it, edit it for rack awareness, and validate it before actually applying it to the Ceph cluster.

Get the current CRUSH map:

sudo ceph osd getcrushmap -o crushmap

Decompile the current CRUSH map:

sudo crushtool -d crushmap -o crushmap.txt

The decompiled CRUSH map looks like this:

cat crushmap.txt

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host cephtest {
    id -2    # do not change unnecessarily
    # weight 0.180
    alg straw
    hash 0    # rjenkins1
    item osd.0 weight 0.060
    item osd.1 weight 0.060
    item osd.2 weight 0.060
}
root rack {
    id -1    # do not change unnecessarily
    # weight 0.180
    alg straw
    hash 0    # rjenkins1
    item cephtest weight 0.180
}

# rules
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take rack
    step choose firstn 0 type osd
    step emit
}


# end crush map

How to edit the CRUSH map for rack awareness and validate it with crushtool:
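
Below is a sketch of how this map can be edited for rack awareness. Only the added and changed sections are shown; the existing host cephtest block stays as it is. The second host cephtest2, its OSDs (osd.3 to osd.5), the rack names and the weights are hypothetical, added only to illustrate the structure, and the root bucket (confusingly named "rack" in the default map above) is renamed to "default" here. Two things change conceptually: rack buckets sit between the root and the hosts, and the rule selects leaves per rack instead of picking OSDs directly.

# devices -- hypothetical OSDs for the second host
device 3 osd.3
device 4 osd.4
device 5 osd.5

# buckets -- added (host cephtest remains unchanged)
host cephtest2 {
    id -4    # do not change unnecessarily
    alg straw
    hash 0    # rjenkins1
    item osd.3 weight 0.060
    item osd.4 weight 0.060
    item osd.5 weight 0.060
}
rack rack1 {
    id -3    # do not change unnecessarily
    alg straw
    hash 0    # rjenkins1
    item cephtest weight 0.180
}
rack rack2 {
    id -5    # do not change unnecessarily
    alg straw
    hash 0    # rjenkins1
    item cephtest2 weight 0.180
}
root default {
    id -1    # do not change unnecessarily
    alg straw
    hash 0    # rjenkins1
    item rack1 weight 0.180
    item rack2 weight 0.180
}

# rules -- changed to separate replicas across racks
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type rack
    step emit
}

Compile the edited text back into a binary map and run test mappings against it before touching the cluster (the file names here are arbitrary):

sudo crushtool -c crushmap.txt -o crushmap-new

sudo crushtool --test -i crushmap-new --rule 0 --num-rep 2 --show-mappings
sudo crushtool --test -i crushmap-new --rule 0 --num-rep 2 --show-bad-mappings

--show-mappings prints which OSDs each test input maps to, so you can confirm that the replicas land in different racks; --show-bad-mappings prints only the inputs that could not be fully mapped, so no output there is what you want.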

Note: Applying the new CRUSH map will cause data to be rebalanced, which can have a lengthy performance impact depending on how much data is in the cluster.
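
Once the test mappings look correct, inject the compiled map into the cluster; this is the step that actually triggers the rebalancing mentioned above:

sudo ceph osd setcrushmap -i crushmap-new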
