From b83c6c7ef57bf1e50df8707610be4f7a2ae7e63e Mon Sep 17 00:00:00 2001
From: vexxhost-bot <105816074+vexxhost-bot@users.noreply.github.com>
Date: Mon, 9 Sep 2024 13:21:21 -0400
Subject: [PATCH] [stable/2023.1] Add docs for Ceph (#1869)
This is an automated cherry-pick of #1866
/assign mnaser
---
doc/source/admin/ceph.rst | 77 ++++++++++++++++++++++++++++++++++++++
doc/source/admin/index.rst | 3 +-
2 files changed, 79 insertions(+), 1 deletion(-)
create mode 100644 doc/source/admin/ceph.rst
diff --git a/doc/source/admin/ceph.rst b/doc/source/admin/ceph.rst
new file mode 100644
index 000000000..4c1e12051
--- /dev/null
+++ b/doc/source/admin/ceph.rst
@@ -0,0 +1,77 @@
+##########
+Ceph Guide
+##########
+
+***************************************
+Placement Groups (PGs) and Auto-scaling
+***************************************
+
+In Ceph, Placement Groups (PGs) are an important abstraction that helps
+distribute objects across the cluster. Each PG can be thought of as a logical
+collection of objects, and Ceph uses these PGs to assign data to the
+appropriate OSDs (Object Storage Daemons). Proper management of PGs is
+critical to the health and performance of your Ceph cluster.
+
+The number of PGs must be carefully configured for the size and layout of
+your cluster: cluster performance can suffer if you have too many or too few
+placement groups.
+
+To learn more about placement groups and their role in Ceph, refer to the
+`placement groups <https://docs.ceph.com/en/latest/rados/operations/placement-groups/>`_
+documentation from the Ceph project.
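+
+One way to get a quick overview of how PGs are currently allocated is to list
+the pools along with their details, which include the PG count for each pool:
+
+.. code-block:: console
+
+ $ ceph osd pool ls detail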
+
+The primary recommendations for a Ceph cluster are the following:
+
+- Enable placement group auto-scaling
+- Enable the Ceph balancer module to ensure data is evenly distributed across OSDs
+
+The following sections provide guidance on how to enable these features in your
+Ceph cluster.
+
+Enabling PG Auto-scaling
+========================
+
+Ceph provides a built-in placement group auto-scaling module, which can
+dynamically adjust the number of PGs based on cluster utilization. This is
+particularly useful as it reduces the need for manual intervention when
+scaling your cluster up or down.
+
+To enable PG auto-scaling, execute the following command in your Ceph cluster:
+
+.. code-block:: console
+
+ $ ceph mgr module enable pg_autoscaler
+
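+Once the module is enabled, you can review what the auto-scaler intends to do
+for each pool, including the current and target PG counts (the exact columns
+vary between Ceph releases):
+
+.. code-block:: console
+
+ $ ceph osd pool autoscale-status
+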
+Auto-scaling can also be controlled on a per-pool basis, and you can
+optionally give the auto-scaler a hint by setting a target size or the
+percentage of the cluster you expect a pool to occupy (see the examples
+further below). For example, to enable auto-scaling for a specific pool:
+
+.. code-block:: console
+
+ $ ceph osd pool set <pool-name> pg_autoscale_mode on
+
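+The target size hints mentioned above can be set per pool as well, either as
+an absolute size or as a ratio of the cluster. The pool name and values below
+are placeholders to adapt to your environment:
+
+.. code-block:: console
+
+ $ ceph osd pool set <pool-name> target_size_bytes 100T
+ $ ceph osd pool set <pool-name> target_size_ratio 0.2
+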
+For more detailed instructions, refer to the `Autoscaling Placement Groups <https://docs.ceph.com/en/latest/rados/operations/placement-groups/#autoscaling-placement-groups>`_
+documentation from the Ceph project.
+
+Managing the Ceph Balancer
+==========================
+
+The Ceph Balancer tool redistributes data across OSDs in order to maintain an
+even distribution of data in the cluster. This is especially important as the
+cluster grows, when new OSDs are added, or during recovery operations.
+
+To enable the balancer, run:
+
+.. code-block:: console
+
+ $ ceph balancer on
+
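+The balancer supports more than one optimization mode. On clusters where all
+clients are recent enough to support it, the ``upmap`` mode is generally the
+more precise choice (check your client versions first; this is a common
+configuration rather than a requirement):
+
+.. code-block:: console
+
+ $ ceph balancer mode upmap
+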
+You can check the current balancer status using:
+
+.. code-block:: console
+
+ $ ceph balancer status
+
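+You can also ask the balancer to score how evenly data is currently
+distributed; lower scores indicate a more even distribution:
+
+.. code-block:: console
+
+ $ ceph balancer eval
+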
+For a more in-depth look at how the balancer works and how to configure it,
+refer to the `Balancer module <https://docs.ceph.com/en/latest/rados/operations/balancer/>`_
+documentation from the Ceph project.
diff --git a/doc/source/admin/index.rst b/doc/source/admin/index.rst
index 87d5546e9..2232882b6 100644
--- a/doc/source/admin/index.rst
+++ b/doc/source/admin/index.rst
@@ -13,7 +13,8 @@ information to ensure stable and efficient operation of the system.
.. toctree::
:maxdepth: 2
+ ceph
integration
- monitoring
maintenance
+ monitoring
troubleshooting