From 2e3a9a3afe248e5df7b05f0543bdac1f780e0212 Mon Sep 17 00:00:00 2001
From: Tommy Li
Date: Thu, 27 Jul 2023 10:19:28 -0700
Subject: [PATCH] add readme for prompt tuning samples

---
 samples/README.md                           | 2 ++
 samples/huggingface-prompt-tuning/README.md | 3 +++
 samples/peft-modelmesh-pipeline/README.md   | 3 +++
 3 files changed, 8 insertions(+)
 create mode 100644 samples/huggingface-prompt-tuning/README.md
 create mode 100644 samples/peft-modelmesh-pipeline/README.md

diff --git a/samples/README.md b/samples/README.md
index 828c978a67..0219161e38 100644
--- a/samples/README.md
+++ b/samples/README.md
@@ -29,3 +29,5 @@ If you are interested more in the larger list of pipelines samples we are testin
 + [Using Tekton Custom Task on KFP](/samples/tekton-custom-task)
 + [The flip-coin pipeline using custom task](/samples/flip-coin-custom-task)
 + [Retrieve KFP run metadata using Kubernetes downstream API](/samples/k8s-downstream-api)
++ [Automate prompt tuning for large language models using KubeFlow Pipelines](/samples/huggingface-prompt-tuning)
++ [Serve large language models (LLMs) with custom prompt tuning configuration using Kubeflow Pipelines](/samples/peft-modelmesh-pipeline)
diff --git a/samples/huggingface-prompt-tuning/README.md b/samples/huggingface-prompt-tuning/README.md
new file mode 100644
index 0000000000..99e16ad2ac
--- /dev/null
+++ b/samples/huggingface-prompt-tuning/README.md
@@ -0,0 +1,3 @@
+# Automate prompt tuning for large language models using KubeFlow Pipelines
+
+This sample is from the IBM developer tutorial [Automate prompt tuning for large language models using KubeFlow Pipelines](https://developer.ibm.com/learningpaths/kubeflow-pipelines/automate-prompt-tuning-for-llms/). Please visit the tutorial for more details and instructions.
\ No newline at end of file
diff --git a/samples/peft-modelmesh-pipeline/README.md b/samples/peft-modelmesh-pipeline/README.md
new file mode 100644
index 0000000000..454abaa3a7
--- /dev/null
+++ b/samples/peft-modelmesh-pipeline/README.md
@@ -0,0 +1,3 @@
+# Serve large language models (LLMs) with custom prompt tuning configuration using Kubeflow Pipelines
+
+This sample is from the IBM developer tutorial [Serve large language models (LLMs) with custom prompt tuning configuration using Kubeflow Pipelines](https://developer.ibm.com/learningpaths/kubeflow-pipelines/serve-llms-custom-prompt-tuning/). Please visit the tutorial for more details and instructions.
\ No newline at end of file