Sep 6, 2012 · Export the CRUSH map, then decompile it to inspect the bucket hierarchy:

```
ceph osd getcrushmap -o /tmp/mycrushmap
```

The decompiled map contains bucket definitions like these (the snippet is truncated; "…" marks elided text):

```
host … {
	id -2		# do not change unnecessarily
	# weight 2.000
	alg straw
	hash 0	# rjenkins1
	item osd.1 weight 1.000
	item osd.0 weight 1.000
}
host x.y.z.138 {
	id -4		# do not change unnecessarily
	# weight 1.000
	alg straw
	hash 0	# rjenkins1
	item osd.2 weight 1.000
}
rack rack-1 {
	id -3		# do not …
```

Feb 22, 2024 · The total number of copies to store for each piece of data is determined by the Ceph `osd_pool_default_size` setting. … A host bucket from a newer map, using the `straw2` algorithm and a device class:

```
host … {
	id -5		# do not change unnecessarily
	id -6 class hdd		# do not change unnecessarily
	# weight 13.080
	alg straw2
	hash 0	# rjenkins1
	item osd.12 weight 1.090
	item osd.13 weight 1.090
	item osd.14 weight 1.090
	item osd.15 weight …
```
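The `# weight` comment on a bucket is simply the sum of its item weights, and an item's weight conventionally tracks the device's capacity in TiB: the `1.090` entries above correspond to ~1.2 TB disks, and a host weight of `13.080` suggests twelve such items, of which the snippet shows only four. A minimal sketch of that arithmetic (the helper names are illustrative, not a Ceph API):

```python
TIB = 2 ** 40  # CRUSH weights conventionally track device capacity in TiB

def crush_weight(size_bytes):
    """Illustrative helper: device capacity expressed in TiB, rounded to
    two decimals here for readability. Not a Ceph API call; Ceph stores
    weights internally as fixed-point multiples of 1/65536."""
    return round(size_bytes / TIB, 2)

def bucket_weight(item_weights):
    """A bucket's commented '# weight' is the sum of its item weights."""
    return round(sum(item_weights), 3)

print(crush_weight(1.2e12))          # a ~1.2 TB disk -> weight ~1.09
print(bucket_weight([1.090] * 12))   # twelve such disks in one host -> 13.08
```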
Ceph osd reweight — CephNotes
I found this way to change the config at runtime (without editing any file), but it says the change may require a restart. I don't know what needs restarting, and a restart is exactly what I am trying to avoid:

```
[root@ceph-mon-01 ~]# ceph tell osd.* injectargs '--osd_crush_initial_weight 0'
osd.0: osd_crush_initial_weight = '0.000000' (not observed, change may require restart) ...
```

From a sample ceph.conf:

```
[global]
# By default, Ceph makes 3 replicas of RADOS objects. If you want to maintain four
# copies of an object instead of the default value--a primary copy and three replica
# copies--reset the default values as shown in 'osd_pool_default_size'.
# If you want to allow Ceph to accept an I/O operation to a degraded PG,
# set 'osd_pool_default_min_size' to a number less …
```
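The "(not observed, change may require restart)" reply means the running OSDs will not honor a runtime change to this particular option; it is only read when an OSD starts. A persistent route, sketched under the assumption that you can edit ceph.conf on the OSD hosts (newer releases can store the same option centrally with `ceph config set osd osd_crush_initial_weight 0`):

```
# ceph.conf sketch: new OSDs join the CRUSH map with weight 0, so no data
# migrates to them until you raise their weight explicitly, e.g. with
# `ceph osd crush reweight osd.<id> <weight>`.
[osd]
osd_crush_initial_weight = 0
```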
ceph/add-or-rm-osds.rst at main · ceph/ceph · GitHub
A freshly created map might look like this (truncated):

```
root default {
	id -1		# do not change unnecessarily
	id -2 class hdd		# do not change unnecessarily
	# weight 0.000
	alg straw2
	hash 0	# rjenkins1
}
host osd01 {
	id -3		# do not change unnecessarily
	id -4 class hdd		# do not change unnecessarily
	# weight 0.058
	alg straw2
	hash 0	# rjenkins1
	item osd.0 weight 1.000
	item osd.1 weight 1.000
	item osd.2 ...
```

`ceph osd reweight {osd-num} {weight}` sets the temporary override weight (between 0.0 and 1.0) of a single OSD, diverting some of its data to other devices.

`ceph osd reweight-by-utilization [threshold]` reweights all the OSDs by reducing the weight of OSDs that are heavily overused. By default it adjusts the weights downward on OSDs that have 120% of the average utilization, but if you include a threshold it will use that percentage instead.

```
ceph orch apply osd --all-available-devices
```

After running the above command:

- If you add new disks to the cluster, they will automatically be used to create new OSDs.
- If you remove an OSD and clean the LVM physical volume, a new OSD will be created automatically.
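The selection rule behind `reweight-by-utilization` can be sketched as follows. This is a simplified illustration of the documented behavior (pick OSDs above `threshold`% of mean utilization and lower their 0.0–1.0 override weight), not Ceph's exact internal algorithm; the function name and the proportional scaling are assumptions:

```python
def reweight_by_utilization(util, threshold=120):
    """Simplified sketch: util maps OSD name -> utilization fraction.
    Any OSD whose utilization exceeds threshold% of the cluster mean
    gets a reduced override weight, scaled toward the mean."""
    avg = sum(util.values()) / len(util)
    cutoff = avg * threshold / 100.0
    new_weights = {}
    for osd, u in util.items():
        if u > cutoff:
            # scale the 0.0-1.0 override weight down proportionally
            new_weights[osd] = round(avg / u, 4)
    return new_weights

# With the default 120% threshold, only osd.0 (0.9 vs. a 0.6 mean,
# cutoff 0.72) is selected and scaled down.
print(reweight_by_utilization({"osd.0": 0.9, "osd.1": 0.5, "osd.2": 0.4}))
```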