Cinder

Enabling the Cinder backup service

Here are the quick steps to enable the Cinder backup service for the Ceph and Swift storage back-ends.

For Ceph storage as the backup back-end:

To enable the Ceph backup driver, include the following options in the cinder.conf file:

backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user = cinder
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
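
The driver assumes the backups pool exists and that the configured 'cinder' user can write to it. A minimal sketch of preparing that and restarting the service is below; the PG count, the existing caps and the service name are assumptions for a typical deployment:

# Create the pool used by the Ceph backup driver (PG count is deployment-specific)
ceph osd pool create backups 128
# Give the 'cinder' user access to it. Note 'ceph auth caps' replaces the full cap list,
# so include whatever caps the user already has (e.g. for the volumes pool) as well.
ceph auth caps client.cinder mon 'allow r' osd 'allow rwx pool=volumes, allow rwx pool=backups'
# Restart the backup service so the new cinder.conf options take effect
service cinder-backup restart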

For Swift storage as the backup back-end:

To enable the Swift backup driver, include the following options in the cinder.conf file:

backup_driver = cinder.backup.drivers.swift
backup_swift_url = http://localhost:8080/v1/AUTH_
backup_swift_container = volumebackups
backup_swift_object_size = 52428800
backup_swift_retry_attempts = 3
backup_swift_retry_backoff = 2
backup_compression_algorithm = zlib
backup_swift_auth = per_user
backup_swift_user = <None>
backup_swift_key = <None>
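
With either back-end configured and the cinder-backup service restarted, a quick way to verify it is to back up an existing volume and check its status. The volume ID and backup name below are placeholders; the last command applies to the Swift back-end:

cinder backup-create --display-name test-backup <volume-id>
cinder backup-list
swift list volumebackups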

 

Ceph

HEALTH_WARN pool data has too few pgs

My Ceph cluster is giving a health warning that the pool 'data' has too few PGs:

 $ ceph health
 HEALTH_WARN pool data has too few pgs

 

A quick fix for the above warning is to increase the max object skew from its default value of 10 to 20 or so.

For example:

mon_pg_warn_max_object_skew = 20

Inject the above configuration option into the Ceph mons as shown below:

ceph tell mon.* injectargs '--mon_pg_warn_max_object_skew 20'
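
Note that injectargs only changes the running monitors; to keep the value across monitor restarts, it can also be set in ceph.conf on the mon nodes, along these lines (the section placement shown is an assumption, adjust to your layout):

# /etc/ceph/ceph.conf on the monitor nodes
[mon]
mon_pg_warn_max_object_skew = 20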

 

 

Ceph

Difference between ‘ceph osd reweight’ and ‘ceph osd crush reweight’

The email thread below explained to me the difference between "crush reweight"
and "reweight":
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-June/040961.html

 ceph osd reweight 
 ceph osd crush reweight 

> ceph osd crush reweight 
sets the CRUSH weight of the OSD. This weight is an arbitrary value (generally 
the size of the disk in TB or something) and controls how much data the 
system tries to allocate to the OSD.

This "ceph osd reweight" will only change the weight of the OSD and not the 
weight of the host, so that will cause slightly less movement of data and 
it will only migrate within that host. 
> ceph osd reweight 
 sets an override weight on the OSD. This value is in the range 0 to 1, and 
forces CRUSH to re-place (1-weight) of the data that would otherwise live on 
this drive. It does *not* change the weights assigned to the buckets above the
OSD, and is a corrective measure in case the normal CRUSH distribution 
isn't working out quite right. 
(For instance, if one of your OSDs is at 90% and the others are at 50%, you could reduce this weight to try and compensate for it.)

This 'ceph osd reweight' is not a persistent setting. When an OSD gets marked out, the osd weight will be set to 0. When it gets marked in again, the weight will be changed to 1. Because of this 'ceph osd reweight' is a temporary solution. 
You should only use it to keep your cluster running while you're ordering
more hardware.
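
A quick way to see that non-persistence, using osd.3 as an example (a sketch; the weights are illustrative):

ceph osd reweight 3 0.5        # set an override weight of 0.5
ceph osd out 3                 # marked out: the override weight drops to 0
ceph osd in 3                  # marked in again: the override weight is reset to 1
ceph osd tree | grep osd.3     # the 0.5 override is gone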

The "ceph osd crush reweight" will only change the weight of the OSD and not 
the weight of the host, so that will cause slightly less movement of data and 
it will only migrate within that host.

For example:

$ ceph osd crush reweight osd.3 0.95   // Changes the OSD (and host) weight in the CRUSH map
reweighted item id 3 name 'osd.3' to 0.95 in crush map

$ ceph osd tree | grep osd.3
3  0.95                osd.3  up  1



$ ceph osd reweight 3 0.9   // Will NOT change the weight in the CRUSH map;
                            // data only moves within that node
 Reweighted osd.3 to 0.9 (1)

$ ceph osd tree | grep osd.3
3  0.95                osd.3  up  0.9

NOTE: The "ceph osd reweight" command only changes the override weight of the OSD and
not the weight of the host, so it causes slightly less data movement and the data
migration stays within that specific host/node.

Ceph

Replace faulty HDD without downtime

It is common for an HDD to go down due to hardware failures in a Ceph storage cluster. In this case, hardware vendors may ask for downtime to replace the faulty HDD, but we can manage the replacement without downtime of the Ceph cluster. Details are below with an example of an HP HDD.

 

<<inprogress>
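
A minimal sketch of the generic Ceph-side steps for swapping a failed disk while the cluster stays online (assuming the failed disk backs osd.3; the host and device names are placeholders, and the HP-specific hot-swap/controller steps are not covered here):

# Mark the failing OSD out so its data re-replicates while the cluster stays up
ceph osd out 3
# Wait for recovery to finish (watch with 'ceph -w' / 'ceph health')

# Stop the OSD daemon on its host, then remove it from CRUSH, auth and the OSD map
service ceph stop osd.3            # or: systemctl stop ceph-osd@3, depending on the release
ceph osd crush remove osd.3
ceph auth del osd.3
ceph osd rm 3

# After the new disk is hot-swapped in, create the replacement OSD, for example
# with ceph-deploy from the admin node (host and device are placeholders)
ceph-deploy osd create node1:/dev/sdX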

Ceph, Glance

Glance: Upload images directly in ceph

Currently, uploading Glance images takes a long time if the images are big (say > 2GB). Ceph provides a way to upload the image using the "rbd import" command and then map the same image name into Glance. This process is faster and easier.

Step#1:
First upload the image to Ceph. Here we will explain using a Windows OS image in raw format.
image_id=$(uuidgen)
rbd import /home/swami/windows-r2.raw $image_id -p images --image-format 2
rbd info images/$image_id => Will show the image details.
Step#2:

Now, create a snapshot named 'snap' on the image and protect it, to be compatible with Glance images.
rbd -p images snap create --snap snap $image_id
rbd -p images snap protect --image $image_id --snap snap
Step#3:  
Now, add the above image to Glance.
Note: source the openrc of the tenant where you want to add this image.
glance image-create --id $image_id --name windows-r2.raw --store rbd --disk-format raw --container-format bare --location rbd://$(ceph fsid)/images/$image_id/snap
Step#4:
Check that the Glance image list shows the above image:
glance image-list | grep $image_id
glance image-show $image_id
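
To confirm the image is usable end to end, one option is to create a Cinder volume from it (the volume name and size below are placeholders):

cinder create --image-id $image_id --display-name windows-r2-vol 60
cinder list | grep windows-r2-vol

If Cinder volumes also live in the same Ceph cluster and show_image_direct_url is enabled in Glance, the volume should be created as a copy-on-write clone of the protected snapshot.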