
Ceph: How to add the SSD journal with dm-crypt enabled

Here I am sharing the steps for adding an SSD journal to an OSD with dm-crypt enabled.

Steps:

Create a key using the journal partition’s uuid as below:

dd bs=<key-size> count=1 if=/dev/urandom of=/etc/ceph/dmcrypt-keys/<journal partition uuid>

For example:

  dd bs=512 count=1 if=/dev/urandom of=/etc/ceph/dmcrypt-keys/uuid

How to find the journal partition uuid:

 # ls -l /dev/disk/by-partuuid/

Now, use the below commands to create the dm-crypt partition:

1. cryptsetup -v --cipher aes-xts-plain64 --key-size 512 --key-file /etc/ceph/dmcrypt-keys/uuid luksFormat <journal partition>

# cryptsetup -v --cipher aes-xts-plain64 --key-size 512 --key-file /etc/ceph/dmcrypt-keys/uuid luksFormat /dev/sdc5

2. cryptsetup -v open --type luks --key-file /etc/ceph/dmcrypt-keys/uuid <journal partition> <uuid>

# cryptsetup -v open --type luks --key-file /etc/ceph/dmcrypt-keys/uuid /dev/sdc5 uuid
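To confirm that the decrypted mapping was created before moving on (a quick sanity check, not part of the original steps):

# ls -l /dev/mapper/ | grep <uuid>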

Now, stop the OSD and flush the current journal.

#systemctl stop ceph-osd@<id>
#ceph-osd --flush-journal -i <id>

Now, go to /var/lib/ceph/osd/ceph-<id>/ and move the current journal and journal_dmcrypt to old, as below:

# mv journal journal.old
# mv journal_dmcrypt journal_dmcrypt.old

Now create soft links for the newly created journal and journal_dmcrypt files as below:

# ln -s /dev/disk/by-partuuid/<uuid> ./journal_dmcrypt
# ln -s /dev/mapper/<uuid>  ./journal

NOTE: the ownership of the above files may need to be changed to ceph:ceph, as below:

 # chown ceph:ceph journal journal_dmcrypt

Now, create osd journal as below:

# ceph-osd --mkjournal -i <id>

Now, start the osd as below:

#systemctl start ceph-osd@<id>

The OSD should now be up and in with the new journal.
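To verify, you can check that the OSD is back up and that the journal link points at the new device (a quick sanity check, assuming osd.<id> is the OSD you just re-journaled):

# ceph osd tree | grep osd.<id>
# ls -l /var/lib/ceph/osd/ceph-<id>/journal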

 

 


Ceph: Increase OSD start timeout

For encrypted OSDs, the OSD start timeout needs to be increased from the default value of 300 to 900 seconds for the OSD to start. How can we do this? Just follow the below two steps:

#cp /lib/systemd/system/ceph-disk@.service /etc/systemd/system/
#sed -i "s/CEPH_DISK_TIMEOUT=300/CEPH_DISK_TIMEOUT=900/" /etc/systemd/system/ceph-disk@.service
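NOTE: since this places a modified unit file under /etc/systemd/system/, systemd generally needs to re-read its unit files before the new timeout takes effect:

#systemctl daemon-reload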

Now restart the OSDs:

# systemctl restart ceph-osd\*.service ceph-osd.target

Ceph: How to remove objects from a pool

How can we remove objects from a pool without removing the pool itself?
We can use “rados -p <pool> cleanup --prefix <prefix>” to remove all the objects with a specific prefix.

First, check all the objects in that pool using the below command:

$ rados -p <pool> ls

For example, if you want to clean up the ‘rados bench write’ testing objects, you can use the below commands:

$ rados -p <pool> cleanup --prefix <prefix>
$ rados -p rbdbench cleanup --prefix benchmark // will remove all objects prefixed with "benchmark"

You can also remove all the objects from a pool as below, but note that the below command will delete all the objects in that pool, so be careful when using it.

$ for i in `rados -p <pool> ls`; do echo $i; rados -p <pool> rm $i; done
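Depending on your Ceph release, rados also provides a purge subcommand that removes every object in a pool in one shot; use it with the same caution:

$ rados purge <pool> --yes-i-really-really-mean-it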

Ceph: Data scrubbing?

Data scrubbing is:

   - An error checking and correction method, or
   - A routine check to ensure that the data on a file system is in pristine condition and has no errors.

In Ceph, data integrity is of primary concern because of the huge amounts of data being read and written daily.

A simple example of scrubbing is a file system check done with tools like ‘e2fsck’ on EXT2/3/4, or ‘xfs_repair’ on XFS.

Ceph also includes daily scrubbing as well as weekly scrubbing (which is called deep scrubbing).
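Both can also be triggered manually on a single placement group if you want to check it right away (the PG id below is just a placeholder):

# ceph pg scrub <pg-id>          // light scrub: compares object metadata and sizes
# ceph pg deep-scrub <pg-id>     // deep scrub: also reads the data and verifies checksums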

NOTE: Btrfs is one of the file systems that can schedule an internal scrubbing automatically, to ensure that corruptions are detected and preventive measures are taken automatically. Since Btrfs can maintain multiple copies of data, once it finds an error in the primary copy, it can check for a good copy (if mirroring is used) and replace the bad one.


Openstack: Quota details for all Tenants

How to get the quota details from an OpenStack cloud?

An OpenStack cloud has default quota numbers per tenant for vCPUs, RAM, instances, floating IPs, volumes, snapshots, gigabytes, etc. At some point we need to see how much of the quota is used per tenant. Just run the below script and get the output as a comma-separated file.

Prerequisites:

  1. Set up the OpenStack “admin” user credentials.

  > source openrc

Find the script at my GitHub location below:

quota_info.sh

Run quota_info.sh

sh quota_info.sh > quota_details.csv

Sample output looks as below:

TenantName,RAM(MBs),,,Cores,,,FloatingIps,,,Instances,,,Volumes,,,Gigabytes,,,Snapshots,,
,Max,Used,Util %,Max,Used,Util %,Max,Used,Util %,Max,Used,Util %,Max,Used,Util %,Max,Used,Util %,Max,Used,Util %
ABC1,1024000,105728,10,200,57,28,100,0,0,500,26,5,100,10,10,10000,462,4,100,4,4
ABC2,51200,0,0,100,0,0,100,0,0,100,0,0,10,0,0,1000,0,0,10,0,0
ABC3,51200,28736,56,100,15,15,100,0,0,100,11,11,20,13,65,1000,192,19,10,0,0
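For reference, here is a rough sketch of the idea behind such a script (this is not the actual quota_info.sh; it assumes the admin credentials are sourced and the python-openstackclient is installed, and it only covers a few of the quota fields):

#!/bin/bash
# Sketch: print per-tenant quota limits as CSV.
echo "TenantName,RAM(MBs),Cores,Instances"
for id in $(openstack project list -f value -c ID); do
    name=$(openstack project show "$id" -f value -c name)
    ram=$(openstack quota show "$id" -f value -c ram)
    cores=$(openstack quota show "$id" -f value -c cores)
    instances=$(openstack quota show "$id" -f value -c instances)
    echo "$name,$ram,$cores,$instances"
done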