Uncategorized

Openstack: Quota details for all Tenants

How to get the quota details from an OpenStack cloud?

An OpenStack cloud has default quota limits per tenant for vCPUs, RAM, instances, floating IPs, volumes, snapshots, gigabytes, etc. At some point we need to see how much of the quota is used per tenant. Just run the below script and get the output as a comma-separated (CSV) file.

Prerequisites:

  1. Set up the OpenStack “admin” user credentials.

  > source openrc

 

Find the script at my GitHub location below:

quota_info.sh

Run quota_info.sh

sh quota_info.sh > quota_details.csv

 

Sample output looks as below:

TenantName,RAM(MBs),,,Cores,,,FloatingIps,,,Instances,,,Volumes,,,Gigabytes,,,Snapshots,,
,Max,Used,Util %,Max,Used,Util %,Max,Used,Util %,Max,Used,Util %,Max,Used,Util %,Max,Used,Util %,Max,Used,Util %
ABC1,1024000,105728,10,200,57,28,100,0,0,500,26,5,100,10,10,10000,462,4,100,4,4
ABC2,51200,0,0,100,0,0,100,0,0,100,0,0,10,0,0,1000,0,0,10,0,0
ABC3,51200,28736,56,100,15,15,100,0,0,100,11,11,20,13,65,1000,192,19,10,0,0
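 

For reference, a minimal sketch of what a quota-collection script could look like. This is not the exact quota_info.sh from GitHub; it assumes a reasonably recent unified openstack CLI (where "openstack limits show --absolute" reports both limits and current usage) and only covers a few of the metrics:

#!/bin/bash
# Hypothetical sketch (not the original quota_info.sh): print per-tenant
# quota limits and usage as CSV. Assumes admin credentials are sourced.
echo "TenantName,RAM Max,RAM Used,Cores Max,Cores Used,Instances Max,Instances Used"
openstack project list -f value -c ID -c Name | while read -r id name; do
    # "openstack limits show --absolute" reports both the limits and the current usage
    limits=$(openstack limits show --absolute --project "$id" -f value -c Name -c Value)
    ram_max=$(echo "$limits"    | awk '$1=="maxTotalRAMSize"    {print $2}')
    ram_used=$(echo "$limits"   | awk '$1=="totalRAMUsed"       {print $2}')
    cores_max=$(echo "$limits"  | awk '$1=="maxTotalCores"      {print $2}')
    cores_used=$(echo "$limits" | awk '$1=="totalCoresUsed"     {print $2}')
    inst_max=$(echo "$limits"   | awk '$1=="maxTotalInstances"  {print $2}')
    inst_used=$(echo "$limits"  | awk '$1=="totalInstancesUsed" {print $2}')
    echo "$name,$ram_max,$ram_used,$cores_max,$cores_used,$inst_max,$inst_used"
done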

 

 

Ceph

Ceph-How to add the SSD journal

How to add an SSD for the Ceph OSD journal

Here I will be discussing how to add an SSD for the OSD journal.

Prerequisites:

– Ceph cluster should be in “HEALTH_OK” state (a quick check is shown below)
– All placement groups (PGs) should be “active + clean”
– Set ceph osd noout to stop the rebalancing activity.
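
A quick way to verify these prerequisites before starting:

ceph health        # should report HEALTH_OK
ceph pg stat       # all PGs should be active+clean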

Steps: (Don’t do the below steps in parallel)

Step#1 Set noout – so that no data migration starts

ceph osd set noout

Step#2 Stop one OSD daemon
/etc/init.d/ceph stop osd.2

Step#3 Wait a bit for the stop to finish, then flush the journal – IMPORTANT!!
ceph-osd -i 2 --flush-journal

Step#4 Unmount (for test only) to make sure no process is using the OSD
umount /var/lib/ceph/osd/ceph-2

Step#5 Create the journal partitions like this (see the label check below)

  parted /dev/SSD
  mkpart journal-2 1 15G
  mkpart journal-3 15G 30G
  mkpart journal-4 30G 45G
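
After creating the partitions, the labels should be visible under /dev/disk/by-partlabel (a quick sanity check; the exact device names depend on your SSD):

  ls -l /dev/disk/by-partlabel/
  # expect symlinks such as journal-2, journal-3, journal-4 pointing to the SSD partitions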

Step#6 Change “ceph.conf” so that the new journal path is used

[osd.2]
host = ceph-osd-01
public_addr = 192.16.2.2
cluster_addr = 192.16.3.2
osd_journal = /dev/disk/by-partlabel/journal-2

Step#7 Initialize the journal

ceph-osd -i 2 --mkjournal

Step#8 Start osd daemon

/etc/init.d/ceph start osd.2
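
Before moving on to the next OSD, it is worth confirming the daemon came back up, for example:

ceph osd tree
# osd.2 should be reported as "up" again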

Step#9 Follow Step#2 to Step#8 for all OSDs/journals, and update the below entry in ceph.conf as a global entry:

osd_journal = /dev/disk/by-partlabel/journal-$id
For example:

osd_journal = /dev/disk/by-partlabel/journal-1

 

Ceph

Ceph-How to increase OSD Journal size

How to increase Ceph OSD journal size

Here I will be discussing how to increase the journal size. The current journal size is set to 2GB, and we will see the process of increasing it to 10GB.

Prerequisites:

– Ceph cluster should be in “HEALTH_OK” state
– All placement groups (PGs) should be “active + clean”
– Set ceph osd noout to stop the rebalancing activity.

Steps: (Don’t do the below steps in parallel)

1. Update the ceph.conf file with the desired journal size.

  Change "osd_journal_size = 10000" in the ceph.conf file (the value is in MB, so 10000 is roughly 10 GB), as shown in the sketch below.

2. Set noout – so that no data migration starts

ceph osd set noout

3. Stop the OSD on which you want to increase the journal size.

 sudo stop ceph-osd id=<osd_id>
 (or)
 /etc/init.d/ceph stop osd.<id>
 For example:
 sudo stop ceph-osd id=1
 (or)
 /etc/init.d/ceph stop osd.1

4. Wait a bit for the stop to finish, then flush the journal – IMPORTANT!!

ceph-osd -i <osd_id>  --flush-journal
ceph-osd -i 1  --flush-journal

5. Go to /var/lib/ceph/osd/ceph-<osd-id>

cd /var/lib/ceph/osd/ceph-1

6. Delete the journal file.

rm journal

7. Now create a new journal file as below:

sudo ceph-osd --mkjournal -i <osd id>

Note: Check the size of the journal file; it should reflect the new journal size.
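
For example, assuming OSD id 1:

ls -lh /var/lib/ceph/osd/ceph-1/journal
# the journal file should now be ~10 GB in size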

8. Now start the OSD.

sudo start ceph-osd id=<osd id>
  (or)
/etc/init.d/ceph start osd.<id>

9. Validate that the new journal size is used.

ceph --admin-daemon /var/run/ceph/ceph-osd.<osd-id>.asok config get osd_journal_size
ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config get osd_journal_size
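
The output should look roughly like the below (exact formatting may vary between Ceph releases):

{ "osd_journal_size": "10000" }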

10. Make sure the Ceph cluster is back to HEALTH_OK state and all PGs are active + clean.

11. Run step#1 to step#10 on all OSD daemons, one by one.

12. Finally unset the noout flag.

    sudo ceph osd unset noout
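
As an optional sanity check, confirm the flag is really cleared:

    ceph -s
    # the status output should no longer list the "noout" flag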

NOTE:
  – Make sure you run the above process on one OSD at a time.
 – The cluster should always be in HEALTH_OK state.
 – All PGs should be active + clean.