[stable/2023.1] Add docs for Ceph (#1869)
This is an automated cherry-pick of #1866
/assign mnaser
vexxhost-bot authored Sep 9, 2024
1 parent c444f6a commit b83c6c7
Showing 2 changed files with 79 additions and 1 deletion.
77 changes: 77 additions & 0 deletions doc/source/admin/ceph.rst
@@ -0,0 +1,77 @@
##########
Ceph Guide
##########

***************************************
Placement Groups (PGs) and Auto-scaling
***************************************

In Ceph, Placement Groups (PGs) are an important abstraction that helps
distribute objects across the cluster. Each PG can be thought of as a logical
collection of objects, and Ceph uses these PGs to assign data to the
appropriate OSDs (Object Storage Daemons). Proper management of PGs is
critical to the health and performance of your Ceph cluster.
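
For instance, a quick way to keep an eye on PG health is to print a summary of
PG states across the cluster (the exact output varies between Ceph releases):

.. code-block:: console

   $ ceph pg stat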

The number of PGs must be carefully configured based on the size and layout of
your cluster: performance can be negatively impacted if you have too many or
too few placement groups.
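
For example, to check how many PGs a given pool is currently using (here
``<pool_name>`` is a placeholder for one of your pools):

.. code-block:: console

   $ ceph osd pool get <pool_name> pg_num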

To learn more about placement groups and their role in Ceph, refer to the
`placement groups <https://docs.ceph.com/en/latest/rados/operations/placement-groups/>`_
documentation from the Ceph project.

The primary recommendations for a Ceph cluster are the following:

- Enable placement group auto-scaling
- Enable the Ceph balancer module to ensure data is evenly distributed across OSDs

The following sections provide guidance on how to enable these features in your
Ceph cluster.

Enabling PG Auto-scaling
========================

Ceph provides a built-in placement group auto-scaling module, which can
dynamically adjust the number of PGs based on cluster utilization. This is
particularly useful as it reduces the need for manual intervention when
scaling your cluster up or down.

To enable PG auto-scaling, execute the following command in your Ceph cluster:

.. code-block:: console

   $ ceph mgr module enable pg_autoscaler

You can also configure auto-scaling on a per-pool basis, and optionally give
the autoscaler a hint by setting the target size or the percentage of the
cluster you expect a pool to occupy. For example, to enable auto-scaling for a
specific pool:

.. code-block:: console

   $ ceph osd pool set <pool_name> pg_autoscale_mode on

For more detailed instructions, refer to the `Autoscaling Placement Groups <https://docs.ceph.com/en/reef/rados/operations/placement-groups/#autoscaling-placement-groups>`_
documentation from the Ceph project.
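
As a brief sketch of setting such a hint, the following gives the autoscaler a
relative capacity expectation for a pool; ``volumes`` and the ``0.2`` ratio are
placeholder values used only for illustration:

.. code-block:: console

   $ ceph osd pool set volumes target_size_ratio 0.2

You can then review the autoscaler's current view of each pool, including the
suggested number of PGs, with:

.. code-block:: console

   $ ceph osd pool autoscale-status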

Managing the Ceph Balancer
==========================

The Ceph balancer helps redistribute data across OSDs in order to maintain an
even distribution of data in the cluster. This is especially important as the
cluster grows, as new OSDs are added, or during recovery operations.

To enable the balancer, run:

.. code-block:: console

   $ ceph balancer on

You can check the current balancer status using:

.. code-block:: console

   $ ceph balancer status

For a more in-depth look at how the balancer works and how to configure it,
refer to the `Balancer module <https://docs.ceph.com/en/latest/rados/operations/balancer/>`_
documentation from the Ceph project.
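
As an optional, illustrative step (assuming every client in the cluster is
recent enough to support it), you can switch the balancer to ``upmap`` mode and
then score how evenly data is currently distributed:

.. code-block:: console

   $ ceph balancer mode upmap
   $ ceph balancer eval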
3 changes: 2 additions & 1 deletion doc/source/admin/index.rst
@@ -13,7 +13,8 @@ information to ensure stable and efficient operation of the system.
.. toctree::
   :maxdepth: 2

   ceph
   integration
   monitoring
   maintenance
   monitoring
   troubleshooting
