diff --git a/doc/source/cluster/getting-started.rst b/doc/source/cluster/getting-started.rst index 8bcf33c082b4..df6fae05b1bb 100644 --- a/doc/source/cluster/getting-started.rst +++ b/doc/source/cluster/getting-started.rst @@ -3,6 +3,20 @@ Ray Clusters Overview ===================== +.. toctree:: + :hidden: + + Key Concepts + Deploying on Kubernetes + Deploying on VMs + metrics + configure-manage-dashboard + Applications Guide + faq + package-overview + usage-stats + + Ray enables seamless scaling of workloads from a laptop to a large cluster. While Ray works out of the box on single machines with just a call to ``ray.init``, to run Ray applications on multiple nodes you must first *deploy a Ray cluster*. diff --git a/doc/source/cluster/kubernetes/benchmarks.md b/doc/source/cluster/kubernetes/benchmarks.md index 0bf5221971f6..53bd18e4dea4 100644 --- a/doc/source/cluster/kubernetes/benchmarks.md +++ b/doc/source/cluster/kubernetes/benchmarks.md @@ -2,4 +2,10 @@ # KubeRay Benchmarks -- {ref}`kuberay-mem-scalability` \ No newline at end of file +```{toctree} +:hidden: + +benchmarks/memory-scalability-benchmark +``` + +- {ref}`kuberay-mem-scalability` diff --git a/doc/source/cluster/kubernetes/examples.md b/doc/source/cluster/kubernetes/examples.md index cf37b9ca9623..cd97848d4388 100644 --- a/doc/source/cluster/kubernetes/examples.md +++ b/doc/source/cluster/kubernetes/examples.md @@ -2,6 +2,18 @@ # Examples +```{toctree} +:hidden: + +examples/ml-example +examples/gpu-training-example +examples/stable-diffusion-rayservice +examples/mobilenet-rayservice +examples/text-summarizer-rayservice +examples/rayjob-batch-inference-example +``` + + This section presents example Ray workloads to try out on your Kubernetes cluster. 
- {ref}`kuberay-ml-example` (CPU-only) diff --git a/doc/source/cluster/kubernetes/getting-started.md b/doc/source/cluster/kubernetes/getting-started.md index 13a936d2637d..cc52e66c8f82 100644 --- a/doc/source/cluster/kubernetes/getting-started.md +++ b/doc/source/cluster/kubernetes/getting-started.md @@ -2,6 +2,15 @@ # Getting Started with KubeRay +```{toctree} +:hidden: + +getting-started/raycluster-quick-start +getting-started/rayjob-quick-start +getting-started/rayservice-quick-start +``` + + ## Custom Resource Definitions (CRDs) [KubeRay](https://github.com/ray-project/kuberay) is a powerful, open-source Kubernetes operator that simplifies the deployment and management of Ray applications on Kubernetes. diff --git a/doc/source/cluster/kubernetes/index.md b/doc/source/cluster/kubernetes/index.md index 451575bfb68a..1fd18c084ebe 100644 --- a/doc/source/cluster/kubernetes/index.md +++ b/doc/source/cluster/kubernetes/index.md @@ -1,4 +1,17 @@ # Ray on Kubernetes + +```{toctree} +:hidden: + +getting-started +user-guides +examples +k8s-ecosystem +benchmarks +troubleshooting +references +``` + (kuberay-index)= ## Overview @@ -36,14 +49,14 @@ The Ray docs present all the information you need to start running Ray workloads .. grid:: 1 2 2 2 :gutter: 1 :class-container: container pb-3 - + .. grid-item-card:: **Getting Started** ^^^ - + Learn how to start a Ray cluster and deploy Ray applications on Kubernetes. - + +++ .. button-ref:: kuberay-quickstart :color: primary @@ -56,9 +69,9 @@ The Ray docs present all the information you need to start running Ray workloads **User Guides** ^^^ - + Learn best practices for configuring Ray clusters on Kubernetes. - + +++ .. button-ref:: kuberay-guides :color: primary @@ -71,9 +84,9 @@ The Ray docs present all the information you need to start running Ray workloads **Examples** ^^^ - + Try example Ray workloads on Kubernetes. - + +++ .. 
button-ref:: kuberay-examples :color: primary @@ -86,9 +99,9 @@ The Ray docs present all the information you need to start running Ray workloads **Ecosystem** ^^^ - + Integrate KubeRay with third party Kubernetes ecosystem tools. - + +++ .. button-ref:: kuberay-ecosystem-integration :color: primary @@ -101,9 +114,9 @@ The Ray docs present all the information you need to start running Ray workloads **Benchmarks** ^^^ - + Check the KubeRay benchmark results. - + +++ .. button-ref:: kuberay-benchmarks :color: primary @@ -111,14 +124,14 @@ The Ray docs present all the information you need to start running Ray workloads :expand: Benchmark results - + .. grid-item-card:: **Troubleshooting** ^^^ - + Consult the KubeRay troubleshooting guides. - + +++ .. button-ref:: kuberay-troubleshooting :color: primary diff --git a/doc/source/cluster/kubernetes/k8s-ecosystem.md b/doc/source/cluster/kubernetes/k8s-ecosystem.md index 758a9ae45236..bfb67af72d7d 100644 --- a/doc/source/cluster/kubernetes/k8s-ecosystem.md +++ b/doc/source/cluster/kubernetes/k8s-ecosystem.md @@ -2,6 +2,16 @@ # KubeRay Ecosystem +```{toctree} +:hidden: + +k8s-ecosystem/ingress +k8s-ecosystem/prometheus-grafana +k8s-ecosystem/pyspy +k8s-ecosystem/volcano +k8s-ecosystem/kubeflow +``` + * {ref}`kuberay-ingress` * {ref}`kuberay-prometheus-grafana` * {ref}`kuberay-pyspy-integration` diff --git a/doc/source/cluster/kubernetes/troubleshooting.md b/doc/source/cluster/kubernetes/troubleshooting.md index 5f0d3d82a4b9..5bf2257b44f5 100644 --- a/doc/source/cluster/kubernetes/troubleshooting.md +++ b/doc/source/cluster/kubernetes/troubleshooting.md @@ -2,5 +2,12 @@ # KubeRay Troubleshooting +```{toctree} +:hidden: + +troubleshooting/troubleshooting +troubleshooting/rayservice-troubleshooting +``` + - {ref}`kuberay-troubleshootin-guides` -- {ref}`kuberay-raysvc-troubleshoot` \ No newline at end of file +- {ref}`kuberay-raysvc-troubleshoot` diff --git a/doc/source/cluster/kubernetes/user-guides.md 
b/doc/source/cluster/kubernetes/user-guides.md index 2b600aeff174..1cf538a7c8ea 100644 --- a/doc/source/cluster/kubernetes/user-guides.md +++ b/doc/source/cluster/kubernetes/user-guides.md @@ -2,6 +2,31 @@ # User Guides +```{toctree} +:hidden: + +Deploy Ray Serve Apps +user-guides/rayservice-high-availability +user-guides/observability +user-guides/upgrade-guide +user-guides/k8s-cluster-setup +user-guides/storage +user-guides/config +user-guides/configuring-autoscaling +user-guides/kuberay-gcs-ft +user-guides/gke-gcs-bucket +user-guides/logging +user-guides/gpu +user-guides/rayserve-dev-doc +user-guides/pod-command +user-guides/pod-security +user-guides/helm-chart-rbac +user-guides/tls +user-guides/k8s-autoscaler +user-guides/static-ray-cluster-without-kuberay +``` + + :::{note} To learn the basics of Ray on Kubernetes, we recommend taking a look at the {ref}`introductory guide ` first. diff --git a/doc/source/cluster/kubernetes/user-guides/k8s-cluster-setup.md b/doc/source/cluster/kubernetes/user-guides/k8s-cluster-setup.md index caa42e633b4f..bc04ae096a69 100644 --- a/doc/source/cluster/kubernetes/user-guides/k8s-cluster-setup.md +++ b/doc/source/cluster/kubernetes/user-guides/k8s-cluster-setup.md @@ -2,6 +2,13 @@ # Managed Kubernetes services +```{toctree} +:hidden: + +aws-eks-gpu-cluster +gcp-gke-gpu-cluster +``` + The KubeRay operator and Ray can run on any cloud or on-prem Kubernetes cluster. The simplest way to provision a remote Kubernetes cluster is to use a cloud-based managed service. We collect a few helpful links for users who are getting started with a managed Kubernetes service. 
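The quick-start pages added to these toctrees (`raycluster-quick-start`, `rayjob-quick-start`, `rayservice-quick-start`) all revolve around the KubeRay custom resources. As a rough illustration of the kind of object they deploy, a minimal `RayCluster` manifest looks approximately like this — the API version and image tag are placeholders (older KubeRay releases used `ray.io/v1alpha1`), so check the KubeRay samples for your version:

```yaml
apiVersion: ray.io/v1          # ray.io/v1alpha1 on pre-1.0 KubeRay
kind: RayCluster
metadata:
  name: raycluster-sample
spec:
  headGroupSpec:
    rayStartParams: {}         # extra `ray start` flags for the head node
    template:
      spec:
        containers:
        - name: ray-head
          image: rayproject/ray:2.9.0   # illustrative tag
  workerGroupSpecs:
  - groupName: worker
    replicas: 1
    minReplicas: 1
    maxReplicas: 3
    rayStartParams: {}
    template:
      spec:
        containers:
        - name: ray-worker
          image: rayproject/ray:2.9.0
```

Applying a manifest like this with `kubectl apply -f` is what the quick-start pages walk through in detail; the field names here follow the upstream KubeRay sample configs.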
diff --git a/doc/source/cluster/vms/examples/index.md b/doc/source/cluster/vms/examples/index.md index 059be000a548..6426cb41de72 100644 --- a/doc/source/cluster/vms/examples/index.md +++ b/doc/source/cluster/vms/examples/index.md @@ -2,6 +2,12 @@ # Examples +```{toctree} +:hidden: + +ml-example +``` + :::{note} To learn the basics of Ray on Cloud VMs, we recommend taking a look at the {ref}`introductory guide ` first. diff --git a/doc/source/cluster/vms/index.md b/doc/source/cluster/vms/index.md index 79b3017bdc6c..e472a49e782e 100644 --- a/doc/source/cluster/vms/index.md +++ b/doc/source/cluster/vms/index.md @@ -1,6 +1,15 @@ # Ray on Cloud VMs (cloud-vm-index)= +```{toctree} +:hidden: + +getting-started +User Guides +Examples +references/index +``` + ## Overview In this section we cover how to launch Ray clusters on Cloud VMs. Ray ships with built-in support @@ -23,14 +32,14 @@ The Ray docs present all the information you need to start running Ray workloads .. grid:: 1 2 2 2 :gutter: 1 :class-container: container pb-3 - + .. grid-item-card:: - + **Getting Started** ^^^ - + Learn how to start a Ray cluster and deploy Ray applications in the cloud. - + +++ .. button-ref:: vm-cluster-quick-start :color: primary @@ -38,14 +47,14 @@ The Ray docs present all the information you need to start running Ray workloads :expand: Get Started with Ray on Cloud VMs - + .. grid-item-card:: **Examples** ^^^ - + Try example Ray workloads in the Cloud - + +++ .. button-ref:: vm-cluster-examples :color: primary @@ -53,14 +62,14 @@ The Ray docs present all the information you need to start running Ray workloads :expand: Try example workloads - + .. grid-item-card:: **User Guides** ^^^ - + Learn best practices for configuring cloud clusters - + +++ .. button-ref:: vm-cluster-guides :color: primary @@ -68,14 +77,14 @@ The Ray docs present all the information you need to start running Ray workloads :expand: Read the User Guides - + .. 
grid-item-card:: **API Reference** ^^^ - + Find API references for cloud clusters - + +++ .. button-ref:: vm-cluster-api-references :color: primary diff --git a/doc/source/cluster/vms/user-guides/community/index.rst b/doc/source/cluster/vms/user-guides/community/index.rst index 2c5c25feb9f9..a21c64920cd1 100644 --- a/doc/source/cluster/vms/user-guides/community/index.rst +++ b/doc/source/cluster/vms/user-guides/community/index.rst @@ -3,6 +3,13 @@ Community Supported Cluster Managers ==================================== +.. toctree:: + :hidden: + + yarn + slurm + lsf + .. note:: If you're using AWS, Azure, GCP or vSphere you can use the :ref:`Ray cluster launcher ` to simplify the cluster setup process. diff --git a/doc/source/cluster/vms/user-guides/index.md b/doc/source/cluster/vms/user-guides/index.md index 01bbf04d0827..1c20784335ee 100644 --- a/doc/source/cluster/vms/user-guides/index.md +++ b/doc/source/cluster/vms/user-guides/index.md @@ -2,6 +2,16 @@ # User Guides +```{toctree} +:hidden: + +launching-clusters/index +large-cluster-best-practices +configuring-autoscaling +logging +Community-supported Cluster Managers +``` + :::{note} To learn the basics of Ray on Cloud VMs, we recommend taking a look at the {ref}`introductory guide ` first. diff --git a/doc/source/data/data.rst b/doc/source/data/data.rst index 737298774334..004167a23333 100644 --- a/doc/source/data/data.rst +++ b/doc/source/data/data.rst @@ -4,6 +4,16 @@ Ray Data: Scalable Datasets for ML ================================== +.. toctree:: + :hidden: + + Overview + key-concepts + user-guide + examples/index + api/api + data-internals + Ray Data is a scalable data processing library for ML workloads. It provides flexible and performant APIs for scaling :ref:`Offline batch inference ` and :ref:`Data preprocessing and ingest for ML training `. Ray Data uses `streaming execution `__ to efficiently process large datasets. .. 
image:: images/dataset.svg diff --git a/doc/source/ray-core/examples/overview.rst b/doc/source/ray-core/examples/overview.rst index 3cefef5bb6b7..0d96fee0b0e4 100644 --- a/doc/source/ray-core/examples/overview.rst +++ b/doc/source/ray-core/examples/overview.rst @@ -1,8 +1,15 @@ .. _ray-core-examples-tutorial: + Ray Tutorials and Examples ========================== +.. toctree:: + :hidden: + :glob: + + * + Machine Learning Examples ------------------------- diff --git a/doc/source/ray-core/walkthrough.rst b/doc/source/ray-core/walkthrough.rst index e10c479171bc..ebffa785d67e 100644 --- a/doc/source/ray-core/walkthrough.rst +++ b/doc/source/ray-core/walkthrough.rst @@ -3,6 +3,16 @@ What is Ray Core? ================= +.. toctree:: + :maxdepth: 1 + :hidden: + + Key Concepts + User Guides + Examples + api/index + + Ray Core provides a small number of core primitives (i.e., tasks, actors, objects) for building and scaling distributed applications. Below we'll walk through simple examples that show you how to turn your functions and classes easily into Ray tasks and actors, and how to work with Ray objects. Getting Started @@ -58,7 +68,7 @@ As seen above, Ray stores task and actor call results in its :ref:`distributed o Next Steps ---------- -.. tip:: To check how your application is doing, you can use the :ref:`Ray dashboard `. +.. tip:: To check how your application is doing, you can use the :ref:`Ray dashboard `. Ray's key primitives are simple, but can be composed together to express almost any kind of distributed computation. Learn more about Ray's :ref:`key concepts ` with the following user guides: diff --git a/doc/source/ray-more-libs/index.rst b/doc/source/ray-more-libs/index.rst index 46b2659facc7..af9f61d1d6f2 100644 --- a/doc/source/ray-more-libs/index.rst +++ b/doc/source/ray-more-libs/index.rst @@ -1,6 +1,19 @@ More Ray ML Libraries ===================== +.. 
toctree:: + :hidden: + + joblib + multiprocessing + ray-collective + dask-on-ray + raydp + mars-on-ray + modin/index + Ray Workflows (Alpha) <../workflows/index> + + .. TODO: we added the three Ray Core examples below, since they don't really belong there. Going forward, make sure that all "Ray Lightning" and XGBoost topics are in one document or group, and not next to each other. diff --git a/doc/source/ray-observability/index.md b/doc/source/ray-observability/index.md index ddba634d8985..e938ee5aa6a5 100644 --- a/doc/source/ray-observability/index.md +++ b/doc/source/ray-observability/index.md @@ -2,6 +2,15 @@ # Monitoring and Debugging +```{toctree} +:hidden: + +getting-started +key-concepts +User Guides +Reference +``` + This section covers how to **monitor and debug Ray applications and clusters** with Ray's Observability features. @@ -28,4 +37,3 @@ Monitoring and debugging Ray applications consist of 4 major steps: 4. Form a hypothesis, implement a fix, and validate it. The remainder of this section covers the observability tools that Ray provides to accelerate your monitoring and debugging workflow. 
- diff --git a/doc/source/ray-observability/reference/api.rst b/doc/source/ray-observability/reference/api.rst index db49be3d96b7..0ea155b0d9a4 100644 --- a/doc/source/ray-observability/reference/api.rst +++ b/doc/source/ray-observability/reference/api.rst @@ -23,9 +23,9 @@ Summary APIs :nosignatures: :toctree: doc/ - ray.util.state.summarize_actors - ray.util.state.summarize_objects - ray.util.state.summarize_tasks + ray.util.state.summarize_actors + ray.util.state.summarize_objects + ray.util.state.summarize_tasks List APIs ~~~~~~~~~~ diff --git a/doc/source/ray-observability/reference/index.md b/doc/source/ray-observability/reference/index.md index 06ef3bfc3449..07384079698a 100644 --- a/doc/source/ray-observability/reference/index.md +++ b/doc/source/ray-observability/reference/index.md @@ -2,9 +2,17 @@ # Reference +```{toctree} +:hidden: + +api +cli +system-metrics +``` + Monitor and debug your Ray applications and clusters using the API and CLI documented in these references. The guides include: * {ref}`state-api-ref` * {ref}`state-api-cli-ref` -* {ref}`system-metrics` \ No newline at end of file +* {ref}`system-metrics` diff --git a/doc/source/ray-observability/user-guides/debug-apps/index.md b/doc/source/ray-observability/user-guides/debug-apps/index.md index 2a8aa767d853..8599421cda07 100644 --- a/doc/source/ray-observability/user-guides/debug-apps/index.md +++ b/doc/source/ray-observability/user-guides/debug-apps/index.md @@ -2,10 +2,21 @@ # Debugging Applications +```{toctree} +:hidden: + +general-debugging +debug-memory +debug-hangs +debug-failures +optimize-performance +ray-debugging +``` + These guides help you perform common debugging or optimization tasks for your distributed application on Ray: * {ref}`observability-general-debugging` * {ref}`ray-core-mem-profiling` * {ref}`observability-debug-hangs` * {ref}`observability-debug-failures` * {ref}`observability-optimize-performance` -* {ref}`ray-debugger` \ No newline at end of file +* 
{ref}`ray-debugger` diff --git a/doc/source/ray-observability/user-guides/index.md b/doc/source/ray-observability/user-guides/index.md index 1db7e61aeea0..d50e4b8b0ae8 100644 --- a/doc/source/ray-observability/user-guides/index.md +++ b/doc/source/ray-observability/user-guides/index.md @@ -2,6 +2,17 @@ # User Guides +```{toctree} +:hidden: + +Debugging Applications +cli-sdk +configure-logging +profiling +add-app-metrics +ray-tracing +``` + These guides help you monitor and debug your Ray applications and clusters. The guides include: @@ -9,4 +20,4 @@ The guides include: * {ref}`observability-programmatic` * {ref}`configure-logging` * {ref}`application-level-metrics` -* {ref}`ray-tracing` \ No newline at end of file +* {ref}`ray-tracing` diff --git a/doc/source/rllib/index.rst b/doc/source/rllib/index.rst index bbe89c457aa8..17eafd2b456a 100644 --- a/doc/source/rllib/index.rst +++ b/doc/source/rllib/index.rst @@ -5,6 +5,18 @@ RLlib: Industry-Grade Reinforcement Learning ============================================ +.. toctree:: + :hidden: + + rllib-training + key-concepts + rllib-env + rllib-algorithms + user-guides + rllib-examples + package_ref/index + + .. image:: images/rllib-logo.png :align: center @@ -54,7 +66,7 @@ PyTorch (or both, as shown below): pip install "ray[rllib]" tensorflow torch -.. margin:: +.. note:: For installation on computers running Apple Silicon (such as M1), please follow instructions `here `._ diff --git a/doc/source/rllib/user-guides.rst b/doc/source/rllib/user-guides.rst index 127d0f9bd554..9725e1491b99 100644 --- a/doc/source/rllib/user-guides.rst +++ b/doc/source/rllib/user-guides.rst @@ -8,6 +8,26 @@ User Guides =========== +.. 
toctree:: + :hidden: + + rllib-advanced-api + rllib-models + rllib-saving-and-loading-algos-and-policies + rllib-concepts + rllib-sample-collection + rllib-replay-buffers + rllib-offline + rllib-catalogs + rllib-connector + rllib-rlmodule + rllib-learner + rllib-torch2x + rllib-fault-tolerance + rllib-dev + rllib-cli + + .. _rllib-feature-guide: RLlib Feature Guides diff --git a/doc/source/serve/advanced-guides/index.md b/doc/source/serve/advanced-guides/index.md index 44451f8351fb..47b2f6a0a1dd 100644 --- a/doc/source/serve/advanced-guides/index.md +++ b/doc/source/serve/advanced-guides/index.md @@ -1,6 +1,21 @@ (serve-advanced-guides)= # Advanced Guides +```{toctree} +:hidden: + +app-builder-guide +advanced-autoscaling +performance +dyn-req-batch +inplace-updates +dev-workflow +grpc-guide +deployment-graphs +managing-java-deployments +deploy-vm +``` + If you’re new to Ray Serve, we recommend starting with the [Ray Serve Quickstart](serve-getting-started). Use these advanced guides for more options and configurations: diff --git a/doc/source/serve/index.md b/doc/source/serve/index.md index b99f4a8d00b9..f014656080fb 100644 --- a/doc/source/serve/index.md +++ b/doc/source/serve/index.md @@ -2,6 +2,27 @@ # Ray Serve: Scalable and Programmable Serving +```{toctree} +:hidden: + +getting_started +key-concepts +develop-and-deploy +model_composition +multi-app +model-multiplexing +configure-serve-deployment +http-guide +Production Guide +monitoring +resource-allocation +autoscaling-guide +advanced-guides/index +architecture +tutorials/index +api/index +``` + :::{tip} [Get in touch with us](https://docs.google.com/forms/d/1l8HT35jXMPtxVUtQPeGoe09VGp5jcvSv0TqPgyz6lGU) if you're using or considering using Ray Serve. ::: @@ -17,7 +38,7 @@ Ray Serve is a scalable model serving library for building online inference APIs. 
Serve is framework-agnostic, so you can use a single toolkit to serve everything from deep learning models built with frameworks like PyTorch, TensorFlow, and Keras, to Scikit-Learn models, to arbitrary Python business logic. It has several features and performance optimizations for serving Large Language Models such as response streaming, dynamic request batching, multi-node/multi-GPU serving, etc. -Ray Serve is particularly well suited for [model composition](serve-model-composition) and many model serving, enabling you to build a complex inference service consisting of multiple ML models and business logic all in Python code. +Ray Serve is particularly well suited for [model composition](serve-model-composition) and many model serving, enabling you to build a complex inference service consisting of multiple ML models and business logic all in Python code. Ray Serve is built on top of Ray, so it easily scales to many machines and offers flexible scheduling support such as fractional GPUs so you can share resources and serve many machine learning models at low cost. @@ -151,7 +172,7 @@ Serve supports arbitrary Python code and therefore integrates well with the MLOp :::{dropdown} LLM developer :animate: fade-in-slide-down -Serve enables you to rapidly prototype, develop, and deploy scalable LLM applications to production. Many large language model (LLM) applications combine prompt preprocessing, vector database lookups, LLM API calls, and response validation. Because Serve supports any arbitrary Python code, you can write all these steps as a single Python module, enabling rapid development and easy testing. You can then quickly deploy your Ray Serve LLM application to production, and each application step can independently autoscale to efficiently accommodate user traffic without wasting resources. In order to improve performance of your LLM applications, Ray Serve has features for batching and can integrate with any model optimization technique. 
Ray Serve also supports streaming responses, a key feature for chatbot-like applications. +Serve enables you to rapidly prototype, develop, and deploy scalable LLM applications to production. Many large language model (LLM) applications combine prompt preprocessing, vector database lookups, LLM API calls, and response validation. Because Serve supports any arbitrary Python code, you can write all these steps as a single Python module, enabling rapid development and easy testing. You can then quickly deploy your Ray Serve LLM application to production, and each application step can independently autoscale to efficiently accommodate user traffic without wasting resources. In order to improve performance of your LLM applications, Ray Serve has features for batching and can integrate with any model optimization technique. Ray Serve also supports streaming responses, a key feature for chatbot-like applications. ::: @@ -222,69 +243,69 @@ or head over to the {doc}`tutorials/index` to get started building your Ray Serv .. grid-item-card:: :class-img-top: pt-2 w-75 d-block mx-auto fixed-height-img - + **Getting Started** ^^^ - + Start with our quick start tutorials for :ref:`deploying a single model locally ` and how to :ref:`convert an existing model into a Ray Serve deployment ` . - + +++ .. button-ref:: serve-getting-started :color: primary :outline: :expand: - - Get Started with Ray Serve - + + Get Started with Ray Serve + .. grid-item-card:: :class-img-top: pt-2 w-75 d-block mx-auto fixed-height-img - + **Key Concepts** ^^^ - + Understand the key concepts behind Ray Serve. Learn about :ref:`Deployments `, :ref:`how to query them `, and using :ref:`DeploymentHandles ` to compose multiple models and business logic together. - + +++ .. button-ref:: serve-key-concepts :color: primary :outline: :expand: - + Learn Key Concepts - + .. 
grid-item-card:: :class-img-top: pt-2 w-75 d-block mx-auto fixed-height-img - + **Examples** ^^^ - + Follow the tutorials to learn how to integrate Ray Serve with :ref:`TensorFlow `, :ref:`Scikit-Learn `, and :ref:`RLlib `. - + +++ .. button-ref:: serve-examples :color: primary :outline: :expand: - + Serve Examples - + .. grid-item-card:: :class-img-top: pt-2 w-75 d-block mx-auto fixed-height-img - + **API Reference** ^^^ - + Get more in-depth information about the Ray Serve API. - + +++ .. button-ref:: serve-api :color: primary :outline: :expand: - + Read the API Reference - + ``` For more, see the following blog posts about Ray Serve: diff --git a/doc/source/serve/production-guide/index.md b/doc/source/serve/production-guide/index.md index 62a63d84eb23..bc4c9fa08096 100644 --- a/doc/source/serve/production-guide/index.md +++ b/doc/source/serve/production-guide/index.md @@ -2,6 +2,18 @@ # Production Guide +```{toctree} +:hidden: + +config +kubernetes +docker +fault-tolerance +handling-dependencies +best-practices +``` + + The recommended way to run Ray Serve in production is on Kubernetes using the [KubeRay](kuberay-quickstart) [RayService](kuberay-rayservice-quickstart) custom resource. The RayService custom resource automatically handles important production requirements such as health checking, status reporting, failure recovery, and upgrades. If you're not running on Kubernetes, you can also run Ray Serve on a Ray cluster directly using the Serve CLI. diff --git a/doc/source/tune/examples/experiment-tracking.rst b/doc/source/tune/examples/experiment-tracking.rst index 2a14d75b2301..fab518a0c8a7 100644 --- a/doc/source/tune/examples/experiment-tracking.rst +++ b/doc/source/tune/examples/experiment-tracking.rst @@ -1,6 +1,15 @@ Tune Experiment Tracking Examples --------------------------------- +.. 
toctree:: + :hidden: + + Weights & Biases Example + MLflow Example + Aim Example + Comet Example + + Ray Tune integrates with some popular Experiment tracking and management tools, such as CometML, or Weights & Biases. If you're interested in learning how to use Ray Tune with Tensorboard, you can find more information in our diff --git a/doc/source/tune/examples/hpo-frameworks.rst b/doc/source/tune/examples/hpo-frameworks.rst index b5f61491443b..b65ca87e5212 100644 --- a/doc/source/tune/examples/hpo-frameworks.rst +++ b/doc/source/tune/examples/hpo-frameworks.rst @@ -1,6 +1,20 @@ Tune Hyperparameter Optimization Framework Examples --------------------------------------------------- +.. toctree:: + :hidden: + + Ax Example + Dragonfly Example + HyperOpt Example + Bayesopt Example + FLAML Example + BOHB Example + Nevergrad Example + Optuna Example + SigOpt Example + + Tune integrates with a wide variety of hyperparameter optimization frameworks and their respective search algorithms. Here you can find detailed examples on each of our integrations: diff --git a/doc/source/tune/examples/index.rst b/doc/source/tune/examples/index.rst index d7636d11e2da..e96d87149135 100644 --- a/doc/source/tune/examples/index.rst +++ b/doc/source/tune/examples/index.rst @@ -4,6 +4,16 @@ Ray Tune Examples ================= +.. toctree:: + :hidden: + + ml-frameworks + experiment-tracking + hpo-frameworks + Other Examples + Exercises + + .. tip:: Check out :ref:`the Tune User Guides ` To learn more about Tune's features in depth. .. _tune-recipes: diff --git a/doc/source/tune/examples/ml-frameworks.rst b/doc/source/tune/examples/ml-frameworks.rst index 053f6295738b..a8f16f1e1488 100644 --- a/doc/source/tune/examples/ml-frameworks.rst +++ b/doc/source/tune/examples/ml-frameworks.rst @@ -1,6 +1,21 @@ Examples using Ray Tune with ML Frameworks ------------------------------------------ +.. 
toctree:: + :hidden: + + Scikit-Learn Example + Keras Example + PyTorch Example + PyTorch Lightning Example + Ray Serve Example + Ray RLlib Example + XGBoost Example + LightGBM Example + Horovod Example + Hugging Face Transformers Example + + Ray Tune integrates with many popular machine learning frameworks. Here you find a few practical examples showing you how to tune your models. At the end of these guides you will often find links to even more examples. @@ -88,7 +103,7 @@ At the end of these guides you will often find links to even more examples. .. button-ref:: tune-huggingface-example A Guide To Tuning Huggingface Transformers With Tune - + .. grid-item-card:: :img-top: /images/tune.png :class-img-top: pt-2 w-75 d-block mx-auto fixed-height-img diff --git a/doc/source/tune/examples/pbt_guide.ipynb b/doc/source/tune/examples/pbt_guide.ipynb index 8f0700a1d572..2629de56e5ee 100644 --- a/doc/source/tune/examples/pbt_guide.ipynb +++ b/doc/source/tune/examples/pbt_guide.ipynb @@ -10,6 +10,12 @@ "\n", "# A Guide to Population Based Training with Tune\n", "\n", + "```{toctree}\n", + ":hidden:\n", + "\n", + "Visualizing and Understanding PBT \n", + "```\n", + "\n", "Tune includes a distributed implementation of [Population Based Training (PBT)](https://www.deepmind.com/blog/population-based-training-of-neural-networks) as\n", "a [scheduler](tune-scheduler-pbt).\n", "\n", @@ -787,7 +793,7 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3.8.9 64-bit", + "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, @@ -801,7 +807,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.8.9" + "version": "3.10.8" }, "vscode": { "interpreter": { diff --git a/doc/source/tune/index.rst b/doc/source/tune/index.rst index c6ba74ad5468..c5197427a648 100644 --- a/doc/source/tune/index.rst +++ b/doc/source/tune/index.rst @@ -3,6 +3,16 @@ Ray Tune: Hyperparameter Tuning =============================== 
+.. toctree:: + :hidden: + + Getting Started + Key Concepts + tutorials/overview + examples/index + faq + api/api + .. image:: images/tune_overview.png :scale: 50% :align: center diff --git a/doc/source/tune/tutorials/overview.rst b/doc/source/tune/tutorials/overview.rst index 4d4fefa45918..41704349baf7 100644 --- a/doc/source/tune/tutorials/overview.rst +++ b/doc/source/tune/tutorials/overview.rst @@ -4,6 +4,26 @@ User Guides =========== +.. toctree:: + :hidden: + + Running Basic Experiments + tune-output + Setting Trial Resources + Using Search Spaces + tune-stopping + tune-trial-checkpoints + tune-storage + tune-fault-tolerance + Using Callbacks and Metrics + tune_get_data_in_and_out + ../examples/tune_analyze_results + ../examples/pbt_guide + Deploying Tune in the Cloud + Tune Architecture + Scalability Benchmarks + + .. tip:: We'd love to hear your feedback on using Tune - `get in touch `_! In this section, you can find material on how to use Tune and its various features. diff --git a/doc/source/workflows/index.rst b/doc/source/workflows/index.rst index e76d6344711e..eaa0bdd50bf5 100644 --- a/doc/source/workflows/index.rst +++ b/doc/source/workflows/index.rst @@ -3,6 +3,18 @@ Ray Workflows: Durable Ray Task Graphs ====================================== +.. toctree:: + :hidden: + + key-concepts + basics + management + metadata + events + comparison + advanced + api/api + .. warning:: Ray Workflows is available as **alpha** in Ray 2.0+. Expect rough corners and