Using Kubeflow for Financial Time Series

In this example, we will walk through the exploration, training and serving of a machine learning model by leveraging Kubeflow's main components.  We will use the Machine Learning with Financial Time Series Data use case.

Prerequisites

You can use a Google Cloud Shell to follow the steps outlined below. In that case you can skip the requirements below, as these dependencies are pre-installed, with the exception that you might still need to install ksonnet via these instructions. You might also need to install uuid-runtime via sudo apt-get install uuid-runtime.

Alternatively, you can work from your local environment. In that case you will need a Linux or Mac environment with Python 3.6.x, and you will need to install the following requirements.

Regardless of which machine you are using, you will need access to a Google Cloud Project and its GKE resources.

Deploying Kubeflow on GKE

The full deployment script for Kubeflow on GKE will create a cluster for you with machines that already have all the appropriate permissions. In the steps below, however, we will use the deployment script for Minikube, but deployed on infrastructure in the cloud instead of on your local machine. The reason for this is that we want to focus on the capabilities of Kubeflow in this example rather than on setting it up securely for a production-like environment. If you are interested, instructions on how to deploy Kubeflow to GKE with authentication can be found on Kubeflow's getting-started-GKE page.

We will first create a cluster named 'kubeflow' on Google Kubernetes Engine. The following set of commands clones the repository, creates a cluster, connects our local environment to the cluster and changes the permissions on the cluster to allow Kubeflow to run properly. Make sure to replace the placeholder parameters between <> with proper values in the commands.

git clone https://github.com/kubeflow/examples.git
cd examples/financial_time_series
gcloud container clusters create kubeflow --zone <your-zone>  --machine-type n1-standard-4 --scopes=https://www.googleapis.com/auth/cloud-platform
gcloud container clusters get-credentials kubeflow --zone <your-zone> --project <your-project-name>
kubectl create clusterrolebinding default-admin  --clusterrole=cluster-admin --user=<your-email-address>

Note that we had to define the scopes explicitly since Kubernetes v1.10. If we dropped the scopes argument, the machines in the cluster would be heavily restricted in their use of Google Cloud APIs to connect to other Google Cloud services such as Google Cloud Storage, BigQuery etc. Note that for this last command you will need the container.clusterRoleBindings.create permission, which you automatically have as a project owner. Our cluster is now up and running and properly set up to install Kubeflow. Note that it requires only a single command to deploy Kubeflow to an existing cluster.

export KUBEFLOW_VERSION=0.2.2
curl https://raw.githubusercontent.com/kubeflow/kubeflow/v${KUBEFLOW_VERSION}/scripts/deploy.sh | bash

As mentioned before, the script above is the deployment script for Minikube, but deployed on infrastructure in the cloud instead of on your local machine.

Once the script is finished, you should see two new folders in your directory.

$ tree
.
├── kubeflow_ks_app
└── kubeflow_repo

Next, we can easily verify the status of the pods by running kubectl get pods:

NAME                                        READY     STATUS
ambassador-788655d76f-8fkpv                 2/2       Running
ambassador-788655d76f-fvjld                 2/2       Running
ambassador-788655d76f-t4xqt                 2/2       Running
centraldashboard-6665fc46cb-jrwvf           1/1       Running
spartakus-volunteer-9c546f4db-5pztt         1/1       Running
tf-hub-0                                    1/1       Running
tf-job-dashboard-644865ddff-fbwnw           1/1       Running
tf-job-operator-v1alpha2-75bcb7f5f7-fgf9h   1/1       Running

Note that you can verify that the corresponding services were also generated by checking the output of kubectl get svc.

Exploration via tf-hub

The TF-hub component of Kubeflow allows us to leverage JupyterHub to investigate the data and start building a feasible machine learning model for the specific problem. In order to access this component, we will set up port-forwarding between the TF-hub pod and our machine.

POD=`kubectl get pods --selector=app=tf-hub | awk '{print $1}' | tail -1`
kubectl port-forward $POD 8000:8000 2>&1 >/dev/null &

You should now be able to access the TF-hub via localhost:8000. After filling in a dummy username and password, you are prompted to select parameters to spawn a Jupyter notebook server. In this case, we will just set the image to gcr.io/kubeflow-images-public/tensorflow-1.8.0-notebook-cpu:v0.2.1 and hit spawn.

The following steps for configuring and running the Jupyter notebook work better on a local machine than in the Google Cloud Shell, as the latter is not meant to stand up a web socket service and is not configured for that. Note that this is not a compulsory step for following the next sections, so if you are working in a Google Cloud Shell you can simply investigate the notebook via the link below.

Once the JupyterHub instance is ready, we will launch a terminal on the instance to install the required packages that our code uses. In order to launch a terminal, click 'new' > 'terminal' and subsequently install the required packages.

pip3 install google-cloud-bigquery==1.6.0 --user

Once the package is installed, navigate back to the JupyterHub home screen. Our JupyterHub instance should be ready to run the code from the slightly adjusted notebook Machine Learning with Financial Time Series Data.ipynb, which is available here. You can simply upload the notebook and walk through it step by step to better understand the problem and the suggested solution(s). In this example, the goal is not to focus on the notebook itself but rather on how this notebook is translated into more scalable training jobs and, later on, serving.
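
To give a feeling of what the notebook does with this package, here is a minimal sketch of the kind of BigQuery call it uses to pull closing prices into a dataframe. The project, dataset and table names are hypothetical placeholders, not the ones used in the notebook.

from google.cloud import bigquery

# Hypothetical table holding daily closing prices; replace with a real one.
QUERY = """
SELECT Date, Close
FROM `your-project.your_dataset.closing_prices`
ORDER BY Date
"""

client = bigquery.Client()                # uses the default project credentials
df = client.query(QUERY).to_dataframe()   # requires pandas to be installed
print(df.tail())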

Training at scale with TF-jobs

The next step is to 're-factor' the notebook code into Python scripts, which can then be containerized into a Docker image. In the folder tensorflow_model you can find these scripts together with a Dockerfile. Subsequently we will build a Docker image on Google Cloud by running the following command:

cd tensorflow_model/
export TRAIN_PATH=gcr.io/<project-name>/<image-name>/cpu:v1
gcloud builds submit --tag $TRAIN_PATH .
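
For reference, the training script exposes its configuration as command-line flags, which are the same flags we will pass to the TF-job below. What follows is a minimal sketch of such an entrypoint, assuming argparse; the flag names match the args used in this example, the rest is illustrative.

import argparse

# Flags mirroring the args passed to the TF-job in this example.
parser = argparse.ArgumentParser()
parser.add_argument('--model', default='FlatModel', help='model class to train')
parser.add_argument('--epochs', type=int, default=30001, help='number of training steps')
parser.add_argument('--bucket', required=True, help='GCS bucket to export the model to')
parser.add_argument('--version', type=int, default=1, help='version folder for TF-serving')
args = parser.parse_args()

# ... build the chosen model, train it for args.epochs steps and
# export it to gs://<bucket>/<version>/ for serving.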

Now that we have an image ready on Google Container Registry, it's time to launch a training job.

cd ../kubeflow_ks_app
ks generate tf-job-simple train

This ksonnet prototype needs to be slightly modified for our needs; you can simply copy the updated version from the repository.

cp ../tensorflow_model/CPU/train.jsonnet ./components/train.jsonnet

Now we need to define the parameters, which are currently set as placeholders in the training job prototype. Note that this introduces a flexible and clean way of working: by changing the parameters you can easily launch another training job without maintaining multiple YAML files in your repository.

# create storage bucket that will be used to store models
export BUCKET_NAME=<your-bucket-name>
gsutil mb gs://$BUCKET_NAME/
# set parameters
export TRAINING_NAME=trainingjob1
ks param set train name $TRAINING_NAME
ks param set train namespace "default"
export TRAIN_PATH=gcr.io/<project-name>/<image-name>/cpu:v1
ks param set train image $TRAIN_PATH
ks param set train workingDir "opt/workdir"
ks param set train args -- python,run_train.py,--model=FlatModel,--epochs=30001,--bucket=$BUCKET_NAME,--version=1

You can verify the parameter settings in the params.libsonnet file in the directory kubeflow_ks_app/components. This file keeps track of all the parameters used to instantiate components from prototypes. In order to submit our tf-job, we need to add our cloud cluster as an environment. Next, we can launch the tf-job to our cloud environment and follow the progress via the logs of the pod.

ks env add cloud
ks apply cloud -c train
POD_NAME=$(kubectl get pods --selector=tf_job_key=$TRAINING_NAME,tf-replica-type=worker \
      --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
kubectl logs -f $POD_NAME

In the logs you can see that the trained model is being exported to Google Cloud Storage. This saved model will be used later on for serving requests. With these parameters, the accuracy on the test set is approximately 60%. Alternatively, you can also port-forward the ambassador and check the progress on localhost:8080. The ambassador functions as the central point of the Kubeflow deployment and monitors the different components. From the ambassador, you can see the Jupyter notebooks, tf-jobs and Kubernetes resources.

POD=`kubectl get pods --selector=service=ambassador | awk '{print $1}' | tail -1`
kubectl port-forward $POD 8080:80 2>&1 >/dev/null &

Deploy and serve with TF-serving

Once the model is trained, the next step will be to deploy it and serve requests. Kubeflow comes with a TF-serving module which you can use to deploy your model with only a few commands.

ks generate tf-serving serve --name=tf-serving
ks param set serve modelPath gs://$BUCKET_NAME/
ks apply cloud -c serve

After running these commands, a deployment and a service will be launched on Kubernetes that will enable you to easily send requests to get predictions from your model. We will do a local test via gRPC to illustrate how to get results from this serving component. Once the pod is up, we can set up port-forwarding to our localhost.

POD=`kubectl get pods --selector=app=tf-serving | awk '{print $1}' | tail -1`
kubectl port-forward $POD 9000:9000 2>&1 >/dev/null &

Now the only thing we need to do is send a request to localhost:9000 with the expected input of the saved model, and it will return a prediction. The saved model expects a time series of closing prices and returns the prediction as a 0 (S&P closes positive) or a 1 (S&P closes negative), together with the version of the saved model, which was recorded when the model was saved. Let's start with a script that populates a request with random numbers to test the service.

cd ..
pip3 install numpy tensorflow-serving-api
python3 -m tensorflow_model.serving_requests.request_random
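
Under the hood, the request_random script builds a gRPC PredictRequest against the port-forwarded service. The sketch below shows what such a call roughly looks like; it assumes a recent tensorflow-serving-api, and the input tensor name and shape are hypothetical placeholders that would have to match the signature of the exported model.

import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

# Connect to the port-forwarded serving pod.
channel = grpc.insecure_channel('localhost:9000')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'tf-serving'  # the name passed to ks generate tf-serving
# Hypothetical input: one time series of eight random 'closing prices'.
request.inputs['inputs'].CopyFrom(
    tf.contrib.util.make_tensor_proto(np.random.rand(1, 8).astype(np.float32)))

result = stub.Predict(request, 10.0)  # 10 second timeout
print(result)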

The output should return an integer, 0 or 1 as explained above, and a string that represents the version. There is another script available that builds a more practical request, with time series data of closing prices for a certain date. In the following script, the same date is used as at the end of the notebook Machine Learning with Financial Time Series Data.ipynb, for comparison purposes.

pip3 install pandas
python3 -m tensorflow_model.serving_requests.request

The response should indicate that the S&P index is expected to close positive, but from the actual data (which is inspected in the notebook mentioned above) we can see that it actually closed negative that day. Let's get back to training and see if we can improve our accuracy.

Running another tf-job and serving update

Most likely a single training job will never be sufficient. It is very common to create a continuous training pipeline to iterate training and verify the output. Submitting another training job with Kubeflow is very easy: by simply adjusting the parameters we can instantiate another component from the train.jsonnet prototype. This time, we will train a more complex neural network with several hidden layers.

cd kubeflow_ks_app
export TRAINING_NAME=trainingjob2
ks param set train name $TRAINING_NAME
ks param set train args -- python,run_train.py,--model=DeepModel,--epochs=30001,--bucket=$BUCKET_NAME,--version=2
ks apply cloud -c train

Verify the logs, or use kubectl describe tfjobs trainingjob2:

POD_NAME=$(kubectl get pods --selector=tf_job_key=$TRAINING_NAME,tf-replica-type=worker \
      --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
kubectl logs -f $POD_NAME

You should notice that the training now takes a few minutes instead of less than one minute; the accuracy on the test set, however, is now 72%. Our training job uploads the trained model to the serving directory of our running tf-serving component. The tf-serving component watches this serving directory and automatically loads the model from the folder with the highest version number (as an integer). Since the newer version has a higher number than the previous one, our tf-serving component should have switched to this new model. Let's see if we get a response from the new version and if the new model gets it right this time.

cd ..
python3 -m tensorflow_model.serving_requests.request

The response returns the updated version number '2' and predicts the correct output, 1, which means that the S&P index closes negative. Hurray!
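
As a side check, you can reproduce this 'highest integer wins' version selection yourself by listing the version folders in the bucket. A minimal sketch, assuming the google-cloud-storage package and the bucket layout described above:

from google.cloud import storage

def latest_model_version(bucket_name):
    """Return the highest integer version folder at the bucket root."""
    bucket = storage.Client().bucket(bucket_name)
    # With a delimiter, top-level 'folders' are reported as prefixes.
    blobs = bucket.list_blobs(delimiter='/')
    list(blobs)  # consume the iterator so that .prefixes is populated
    versions = [int(p.strip('/')) for p in blobs.prefixes if p.strip('/').isdigit()]
    return max(versions)

print(latest_model_version('<your-bucket-name>'))  # 2 after the second training job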

Running TF-job on a GPU

Can we also run the tf-job on a GPU? Imagine the training job does not take just a few minutes, but rather hours or days. We will need another image that installs tensorflow-gpu and has the necessary drivers.

cd tensorflow_model
cp GPU/Dockerfile ./Dockerfile
export TRAIN_PATH_GPU=gcr.io/<project-name>/<image-name>/gpu:v1
gcloud builds submit --tag $TRAIN_PATH_GPU .

The train.jsonnet also needs to be slightly adjusted so that it can run on GPUs as well. You can simply copy the adjusted jsonnet by running the following command.

cp GPU/train.jsonnet ../kubeflow_ks_app/components/train.jsonnet

Now we have to add a GPU to our cloud cluster. We will create a separate node pool and install the necessary NVIDIA GPU device drivers. Note that the node pool must be created in the same zone as the cluster, and that not every zone offers GPUs. For more instructions on how to handle GPUs on Kubernetes, see https://cloud.google.com/kubernetes-engine/docs/how-to/gpus.

gcloud container node-pools create gpu-pool --accelerator type=nvidia-tesla-k80,count=1 --zone <your-zone> --cluster kubeflow --num-nodes 1 --min-nodes 1 --max-nodes 1 --enable-autoscaling --scopes=https://www.googleapis.com/auth/cloud-platform
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/stable/nvidia-driver-installer/cos/daemonset-preloaded.yaml
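
Once the drivers are installed, you can verify from a pod on the new pool that TensorFlow actually sees the GPU. A quick sanity check, using the TF 1.x API:

import tensorflow as tf

# Both calls return a falsy value when no GPU (or no driver) is
# visible to the process.
print(tf.test.is_gpu_available())   # True once the NVIDIA drivers work
print(tf.test.gpu_device_name())    # e.g. '/device:GPU:0'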

Subsequently, the parameters must be updated to fit the new prototype in train.jsonnet.

cd ../kubeflow_ks_app
export TRAINING_NAME=trainingjob3
ks param set train name $TRAINING_NAME
ks param delete train image
ks param set train cpuImage $TRAIN_PATH
ks param set train gpuImage $TRAIN_PATH_GPU
ks param set train num_gpu 1

Next we can deploy the tf-job to our GPU by simply running the following command.

ks apply cloud -c train

Once the pod is up, you can check the logs and verify that the training time is significantly reduced compared to the previous tf-job.

POD_NAME=$(kubectl get pods --selector=tf_job_key=$TRAINING_NAME,tf-replica-type=worker \
      --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
kubectl logs -f $POD_NAME

Clean up

kubectl delete tfjobs trainingjob1
kubectl delete tfjobs trainingjob2
kubectl delete tfjobs trainingjob3
ks delete cloud -c train
ks delete cloud -c serve
ks delete cloud -c kubeflow-core
gcloud container clusters delete kubeflow

Summary

Kubeflow makes it easy for everyone to develop, deploy, and manage portable, scalable ML everywhere and supports the full lifecycle of an ML product, including iteration via Jupyter notebooks. It removes the need for expertise in a large number of areas, reducing the barrier to entry for developing and maintaining ML products.

If you want to get started with Kubeflow, make sure to check out the updated information at https://www.kubeflow.org.