Merge pull request ceph#55899 from zdover23/wip-doc-2024-03-02-rados-radosgw-pgcalc

doc/rados: remove PGcalc from docs


Reviewed-by: Ronen Friedman <[email protected]>
zdover23 authored Mar 3, 2024
2 parents 3e302ab + ccb851d commit 7280a24
Showing 3 changed files with 15 additions and 25 deletions.
4 changes: 0 additions & 4 deletions doc/rados/operations/placement-groups.rst
@@ -641,9 +641,6 @@ pools, each with 512 PGs on 10 OSDs, the OSDs will have to handle ~50,000 PGs
 each. This cluster will require significantly more resources and significantly
 more time for peering.
 
-For determining the optimal number of PGs per OSD, we recommend the `PGCalc`_
-tool.
-
 
 .. _setting the number of placement groups:
 
@@ -935,4 +932,3 @@ about it entirely (if it is too new to have a previous version). To mark the
 
 .. _Create a Pool: ../pools#createpool
 .. _Mapping PGs to OSDs: ../../../architecture#mapping-pgs-to-osds
-.. _pgcalc: https://old.ceph.com/pgcalc/
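With the PGCalc tool retired from the docs, the cluster's own tooling is the remaining way to sanity-check PG counts per OSD. A minimal sketch using standard ceph CLI commands, assuming a running cluster (these commands are not part of this commit):

    # Show each pool's current PG count and the autoscaler's recommendation
    ceph osd pool autoscale-status

    # The per-OSD PG target that the autoscaler works toward
    ceph config get mon mon_target_pg_per_osd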
19 changes: 8 additions & 11 deletions doc/rados/operations/pools.rst
@@ -18,15 +18,14 @@ Pools provide:
   <../erasure-code>`_, resilience is defined as the number of coding chunks
   (for example, ``m = 2`` in the default **erasure code profile**).
 
-- **Placement Groups**: You can set the number of placement groups (PGs) for
-  the pool. In a typical configuration, the target number of PGs is
-  approximately one hundred PGs per OSD. This provides reasonable balancing
-  without consuming excessive computing resources. When setting up multiple
-  pools, be careful to set an appropriate number of PGs for each pool and for
-  the cluster as a whole. Each PG belongs to a specific pool: when multiple
-  pools use the same OSDs, make sure that the **sum** of PG replicas per OSD is
-  in the desired PG-per-OSD target range. To calculate an appropriate number of
-  PGs for your pools, use the `pgcalc`_ tool.
+- **Placement Groups**: The :ref:`autoscaler <pg-autoscaler>` sets the number
+  of placement groups (PGs) for the pool. In a typical configuration, the
+  target number of PGs is approximately one hundred and fifty PGs per OSD. This
+  provides reasonable balancing without consuming excessive computing
+  resources. When setting up multiple pools, set an appropriate number of PGs
+  for each pool and for the cluster as a whole. Each PG belongs to a specific
+  pool: when multiple pools use the same OSDs, make sure that the **sum** of PG
+  replicas per OSD is in the desired PG-per-OSD target range.
 
 - **CRUSH Rules**: When data is stored in a pool, the placement of the object
   and its replicas (or chunks, in the case of erasure-coded pools) in your
@@ -735,8 +734,6 @@ Managing pools that are flagged with ``--bulk``
 ===============================================
 See :ref:`managing_bulk_flagged_pools`.
 
-
-.. _pgcalc: https://old.ceph.com/pgcalc/
 .. _Pool, PG and CRUSH Config Reference: ../../configuration/pool-pg-config-ref
 .. _Bloom Filter: https://en.wikipedia.org/wiki/Bloom_filter
 .. _setting the number of placement groups: ../placement-groups#set-the-number-of-placement-groups
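Since the revised bullet hands PG sizing to the autoscaler, a minimal sketch of the related commands may help; the pool name mypool is hypothetical, and the commands are standard ceph CLI rather than part of this commit:

    # Per-pool autoscaler mode and recommended PG counts
    ceph osd pool autoscale-status

    # Enable autoscaling for a single pool
    ceph osd pool set mypool pg_autoscale_mode on

    # Flag a pool expected to hold a large share of the data,
    # per the --bulk section referenced in the hunk above
    ceph osd pool set mypool bulk true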
17 changes: 7 additions & 10 deletions doc/radosgw/pools.rst
@@ -11,16 +11,13 @@ multiple zones.
 Tuning
 ======
 
-When ``radosgw`` first tries to operate on a zone pool that does not
-exist, it will create that pool with the default values from
-``osd pool default pg num`` and ``osd pool default pgp num``. These defaults
-are sufficient for some pools, but others (especially those listed in
-``placement_pools`` for the bucket index and data) will require additional
-tuning. We recommend using the `Ceph Placement Group’s per Pool
-Calculator <https://old.ceph.com/pgcalc/>`__ to calculate a suitable number of
-placement groups for these pools. See
-`Pools <http://docs.ceph.com/en/latest/rados/operations/pools/#pools>`__
-for details on pool creation.
+When ``radosgw`` first tries to operate on a zone pool that does not exist, it
+will create that pool with the default values from ``osd pool default pg num``
+and ``osd pool default pgp num``. These defaults are sufficient for some pools,
+but others (especially those listed in ``placement_pools`` for the bucket index
+and data) will require additional tuning. See `Pools
+<http://docs.ceph.com/en/latest/rados/operations/pools/#pools>`__ for details
+on pool creation.
 
 .. _radosgw-pool-namespaces:
 
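The defaults mentioned in this hunk correspond to the osd_pool_default_pg_num and osd_pool_default_pgp_num options. A minimal sketch of inspecting and overriding them before radosgw auto-creates its zone pools; the value 128 is illustrative, not a recommendation from this commit:

    # Inspect the defaults radosgw will inherit for auto-created pools
    ceph config get osd osd_pool_default_pg_num
    ceph config get osd osd_pool_default_pgp_num

    # Override them cluster-wide (example values)
    ceph config set global osd_pool_default_pg_num 128
    ceph config set global osd_pool_default_pgp_num 128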
