- List pools:
ceph osd lspools
- Create a rados pool (replicated):
ceph osd pool create ${pool_name} 128 128 replicated
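(the two 128s are the pool's pg_num and pgp_num)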
- Replicated pool usable storage capacity:
1/replicas %
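e.g. with 3 replicas (the default size): 1/3 ≈ 33% of raw capacity is usable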
- Get erasure coded profiles:
ceph osd erasure-code-profile ls
- Create an EC profile:
ceph osd erasure-code-profile set test_ec_profile k=4 m=2 plugin=jerasure technique=reed_sol_van
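(k=4 data chunks plus m=2 coding chunks per object, so the pool can tolerate the loss of up to 2 OSDs holding chunks without losing data)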
- Get specific erasure coded profile info:
ceph osd erasure-code-profile get ec-profile-cinder-ceph
- Create an EC pool:
ceph osd pool create ecpool 128 128 erasure test_ec_profile
- EC pool usable storage capacity:
K/(K + M) %
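e.g. for the k=4, m=2 profile above: 4/(4+2) ≈ 67% of raw capacity is usable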
- List pool detail:
ceph osd pool ls detail
- Enable application for a pool:
ceph osd pool application enable <poolname> <application_name>
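e.g. enable the rbd application on a (hypothetical) pool named test_pool (the standard application names are rbd, rgw and cephfs):
ceph osd pool application enable test_pool rbd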
- Enable overwrite for ec pool:
ceph osd pool set <ec_poolname> allow_ec_overwrites true
- Set/change pg_num:
ceph osd pool set data pg_num 256
- Set/change pgp_num:
ceph osd pool set data pgp_num 256
- Status/start/stop an OSD:
systemctl [status|start|stop] ceph-osd@NUM
- Change the loglevel of OSDs:
juju ssh ceph-mon/0 'sudo ceph tell osd.* injectargs "--debug_osd 1/5"'
- Dump OSD perf stats:
ceph daemon osd.$i perf dump
- Reset perf counter for an OSD:
ceph daemon osd.${OSD} perf reset all
- Dump perf stats of an OSD:
ceph daemon osd.${OSD} perf dump -f json
- Get the OSD number of the _current_ OSD unit (the OSD number can differ from the juju unit number):
mount | grep /var/lib/ceph/osd/ceph- | awk '{split($3, a, "-"); print a[2]}'
- Get OSD number (run on a monitor node):
ceph osd tree
- Dump memory usage of an OSD:
ceph daemon osd.${OSD} dump_mempools
- To check the current value of an OSD_param:
ceph daemon osd.${OSD} config show | grep OSD_param
or: ceph daemon osd.${OSD} config get OSD_param
- Speed up OSD recovery:
juju ssh ceph-mon/${MON} "sudo ceph tell 'osd.*' injectargs '--osd_max_backfills 10 --osd_recovery_max_active 10 --osd_recovery_op_priority 63 --osd_recovery_sleep 0 --osd_recovery_sleep_hdd 0'"
- Restore the OSD recovery tuning params back to their defaults:
juju ssh ceph-mon/${MON} "sudo ceph tell 'osd.*' injectargs '--osd_max_backfills 1 --osd_recovery_max_active 3 --osd_recovery_op_priority 3 --osd_recovery_sleep 0 --osd_recovery_sleep_hdd 0.1'"
- Get OSD device type:
ceph osd metadata 5
- Get an OSD's lvm:
ceph-volume lvm list
- Change all OSDs' OSD_param:
ceph tell osd.* config set OSD_param {value}
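e.g. with osd_max_backfills used as a sample option:
ceph tell osd.* config set osd_max_backfills 2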
- Get OSD versions (from a monitor):
ceph report | jq '.osd_metadata | .[] | .ceph_version'
ceph tell osd.* version
ceph osd versions
- Mount OSD to view as files:
ceph-objectstore-tool --op fuse --data-path /var/lib/ceph/osd/ceph-NUM --mountpoint /mnt/osd-NUM
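(the OSD daemon must be stopped before running ceph-objectstore-tool against its data path)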
- Change all monitors' MON_param:
ceph tell mon.* config set MON_param {value}
- List all the pools:
rados lspools
- Enable a module (balancer here):
ceph mgr module enable balancer
- List modules:
ceph mgr module ls
- Enable the balancer:
ceph osd set-require-min-compat-client luminous
ceph mgr module enable balancer
ceph balancer mode upmap
ceph balancer on
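- Check balancer status:
ceph balancer status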
Benchmark the cluster
(Can also create a new pool and use it to benchmark:
ceph osd pool create ${pool_name} 100 100
)
- write:
rados bench -p ${pool_name} 10 write --cleanup
- seq read:
rados bench -p ${pool_name} 10 seq
- random read:
rados bench -p ${pool_name} 10 rand
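(for the seq/rand read benchmarks to have objects to read, run the write benchmark with --no-cleanup instead, then remove the benchmark objects afterwards with:
rados -p ${pool_name} cleanup
)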
- List all crashes:
ceph crash ls
- Details of a specific crash:
ceph crash info <crash-id>
- Clear a crash:
ceph crash archive <crash-id>
- Clear all:
ceph crash archive-all
- Remove specific crash:
ceph crash rm <crash-id>
- Crash summary:
ceph crash stat
- Get all PGs:
ceph pg dump
- Get problematic PGs & cluster health detail:
ceph health detail
- Query a PG:
ceph pg <pg-id> query
- Get an object's PG:
ceph osd map <pool-name> <object-name>
- PG stat:
ceph pg stat
- PG balancing: use the balancer module on Luminous or newer releases.
- Read an object directly:
rados --pool test_pool get object_name -
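(the trailing - writes the object to stdout; give an output file name instead to save it to a file)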
- List objects from a pool:
rados -p pool_name ls
- List block devices in the rbd pool:
rbd ls
- List block devices in <pool_name>:
rbd ls <pool_name>
- Initialize a pool:
rbd pool init <pool_name>
- Create a block device image:
rbd create --size <MBs> <pool-name>/<image-name>
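e.g. a 1 GiB image in a (hypothetical) pool named test_pool:
rbd create --size 1024 test_pool/test_image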
- Get rbd image info:
rbd info <pool_name>/<image_name>
- Remove a rbd block device:
rbd rm <pool_name>/<image_name>
- Create a CephFS file system:
ceph fs new fs_name meta_repl_pool_name data_pool_name
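e.g. with hypothetical pool names (the metadata pool must be replicated, not erasure coded):
ceph osd pool create cephfs_metadata 32 32 replicated
ceph osd pool create cephfs_data 128 128 replicated
ceph fs new cephfs cephfs_metadata cephfs_data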
- CephFS status:
ceph fs status
- List users:
ceph auth ls
- Get auth info user:
ceph auth get <TYPE.ID>
e.g. ceph auth get client.admin
- Delete a user:
ceph auth del <TYPE.ID>
e.g. ceph auth del client.admin
- Display auth key:
ceph auth print-key <TYPE.ID>
More (not shown/used above; the idea is to add at least one usage example for each command and then remove it from the list below)
ceph-authtool
ceph-bluestore-tool
ceph-client-debug
ceph-conf
ceph-coverage
ceph-crash
ceph-debugpack
ceph-dedup-tool
ceph-dencoder
ceph-diff-sorted
ceph-erasure-code-tool
ceph-fuse
ceph-immutable-object-cache
ceph-kvstore-tool
ceph-mds
ceph-mgr
ceph-mon
ceph-monstore-tool
ceph-objectstore-tool
ceph-osd
ceph-osdomap-tool
ceph-post-file
ceph-syn
ceph_bench_log
ceph_erasure_code_benchmark
ceph_erasure_code_non_regression
ceph_kvstorebench
ceph_multi_stress_watch
ceph_objectstore_bench
ceph_omapbench
ceph_perf_local
ceph_perf_msgr_client
ceph_perf_msgr_server
ceph_perf_objectstore
ceph_radosacl
ceph_rgw_jsonparser
ceph_rgw_multiparser
ceph_scratchtool
ceph_scratchtoolpp
cephfs-data-scan
cephfs-journal-tool
cephfs-meta-injection
cephfs-table-tool
crushtool
get_command_descriptions
init-ceph
librados-config
monmaptool
mount.ceph
neorados
osdmaptool
radosgw-admin
radosgw-es
radosgw-object-expirer
radosgw-token
radosgw
rbd-fuse
rbd-mirror
rbd-nbd
rbd-replay-prep
rbd-replay