From bff24acec2fd8a2d4d833c1e95c712085b33c763 Mon Sep 17 00:00:00 2001 From: Billy Lamberta Date: Wed, 30 May 2018 00:33:29 -0700 Subject: [PATCH] Copy edits following Mark's updates. --- samples/core/get_started/eager.ipynb | 375 ++++++++++++++++++--------- 1 file changed, 249 insertions(+), 126 deletions(-) diff --git a/samples/core/get_started/eager.ipynb b/samples/core/get_started/eager.ipynb index fa482bcdede..206fc57b07b 100644 --- a/samples/core/get_started/eager.ipynb +++ b/samples/core/get_started/eager.ipynb @@ -5,6 +5,8 @@ "colab": { "name": "eager.ipynb", "version": "0.3.2", + "views": {}, + "default_view": {}, "provenance": [], "private_outputs": true, "collapsed_sections": [], @@ -32,7 +34,12 @@ "metadata": { "id": "CPII1rGR2rF9", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ @@ -76,17 +83,22 @@ }, "cell_type": "markdown", "source": [ - "This tutorial describes how to use machine learning to *categorize* Iris flowers by species. It uses [TensorFlow](https://www.tensorflow.org)'s eager execution to (1) build a *model*, (2) *train* the model on example data, and (3) use the model to make *predictions* on unknown data. Machine learning experience isn't required to follow this guide, but you'll need to read some Python code.\n", + "This guide uses machine learning to *categorize* Iris flowers by species. It uses [TensorFlow](https://www.tensorflow.org)'s eager execution to:\n", + "1. Build a *model*,\n", + "2. *Train* this model on example data, and\n", + "3. Use the model to make *predictions* about unknown data.\n", + "\n", + "Machine learning experience isn't required, but you'll need to read some Python code.\n", "\n", "## TensorFlow programming\n", "\n", - "There many [TensorFlow APIs](https://www.tensorflow.org/api_docs/python/) available, but we recommend starting with these high-level TensorFlow concepts:\n", + "There are many [TensorFlow APIs](https://www.tensorflow.org/api_docs/python/) available, but start with these high-level TensorFlow concepts:\n", "\n", "* Enable an [eager execution](https://www.tensorflow.org/programmers_guide/eager) development environment,\n", "* Import data with the [Datasets API](https://www.tensorflow.org/programmers_guide/datasets),\n", "* Build models and layers with TensorFlow's [Keras API](https://keras.io/getting-started/sequential-model-guide/).\n", "\n", - "This tutorial shows these APIs and is structured like many other TensorFlow programs:\n", + "This tutorial is structured like many TensorFlow programs:\n", "\n", "1. Import and parse the data sets.\n", "2. Select the type of model.\n", @@ -94,11 +106,13 @@ "4. Evaluate the model's effectiveness.\n", "5. Use the trained model to make predictions.\n", "\n", - "To learn more about using TensorFlow, see the [Getting Started guide](https://www.tensorflow.org/get_started/) and the [example tutorials](https://www.tensorflow.org/tutorials/). If you'd like to learn about the basics of machine learning, consider taking the [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/).\n", + "For more TensorFlow examples, see the [Get Started](https://www.tensorflow.org/get_started/) and [Tutorials](https://www.tensorflow.org/tutorials/) sections. 
To learn machine learning basics, consider taking the [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/).\n", "\n", "## Run the notebook\n", "\n", - "This tutorial is available as an interactive [Colab notebook](https://colab.research.google.com) for you to run and change the Python code directly in the browser. The notebook handles setup and dependencies while you \"play\" cells to execute the code blocks. This is a fun way to explore the program and test ideas. If you are unfamiliar with Python notebook environments, there are a couple of things to keep in mind:\n", + "This tutorial is available as an interactive [Colab notebook](https://colab.research.google.com) that can execute and modify Python code directly in the browser. The notebook handles setup and dependencies while you \"play\" cells to run the code blocks. This is a fun way to explore the program and test ideas.\n", + "\n", + "If you are unfamiliar with Python notebook environments, there are a couple of things to keep in mind:\n", "\n", "1. Executing code requires connecting to a runtime environment. In the Colab notebook menu, select *Runtime > Connect to runtime...*\n", "2. Notebook cells are arranged sequentially to gradually build the program. Typically, later code cells depend on prior code cells, though you can always rerun a code block. To execute the entire notebook in order, select *Runtime > Run all*. To rerun a code cell, select the cell and click the *play icon* on the left." @@ -130,7 +144,12 @@ "metadata": { "id": "jBmKxLVy9Uhg", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ @@ -148,7 +167,7 @@ "source": [ "### Configure imports and eager execution\n", "\n", - "Import the required Python modules, including TensorFlow, and enable eager execution for this program. Eager execution makes TensorFlow evaluate operations immediately, returning concrete values instead of creating a [computational graph](https://www.tensorflow.org/programmers_guide/graphs) that is executed later. If you are used to a REPL or the `python` interactive console, you'll feel at home.\n", + "Import the required Python modules—including TensorFlow—and enable eager execution for this program. Eager execution makes TensorFlow evaluate operations immediately, returning concrete values instead of creating a [computational graph](https://www.tensorflow.org/programmers_guide/graphs) that is executed later. If you are used to a REPL or the `python` interactive console, this feels familiar.\n", "\n", "Once eager execution is enabled, it *cannot* be disabled within the same program. See the [eager execution guide](https://www.tensorflow.org/programmers_guide/eager) for more details." ] @@ -157,7 +176,12 @@ "metadata": { "id": "g4Wzg69bnwK2", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ @@ -188,7 +212,7 @@ "\n", "Imagine you are a botanist seeking an automated way to categorize each Iris flower you find. Machine learning provides many algorithms to statistically classify flowers. For instance, a sophisticated machine learning program could classify flowers based on photographs. 
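The import-and-setup cell (`g4Wzg69bnwK2`) referenced in the section above is elided from this hunk. For readers following along, here is a minimal sketch of the setup it describes; the exact module list is an assumption based on what the rest of the notebook uses:

```python
from __future__ import absolute_import, division, print_function

import matplotlib.pyplot as plt        # used for the scatter plot further down

import tensorflow as tf
import tensorflow.contrib.eager as tfe  # assumed; TF 1.x contrib API used for metrics later

# Eager execution evaluates operations immediately. Once enabled, it cannot
# be disabled within the same program.
tf.enable_eager_execution()

print("TensorFlow version: {}".format(tf.VERSION))
print("Eager execution: {}".format(tf.executing_eagerly()))
```
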
Our ambitions are more modest—we're going to classify Iris flowers based on the length and width measurements of their [sepals](https://en.wikipedia.org/wiki/Sepal) and [petals](https://en.wikipedia.org/wiki/Petal).\n", "\n", - "The Iris genus entails about 300 species, but our program will classify only the following three:\n", + "The Iris genus entails about 300 species, but our program will only classify the following three:\n", "\n", "* Iris setosa\n", "* Iris virginica\n", @@ -216,7 +240,7 @@ "source": [ "## Import and parse the training dataset\n", "\n", - "We need to download the dataset file and convert it to a structure that can be used by this Python program.\n", + "Download the dataset file and convert it to a structure that can be used by this Python program.\n", "\n", "### Download the dataset\n", "\n", @@ -227,7 +251,12 @@ "metadata": { "id": "J6c7uEU9rjRM", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ @@ -257,7 +286,12 @@ "metadata": { "id": "FQvb_JYdrpPm", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ @@ -273,7 +307,7 @@ }, "cell_type": "markdown", "source": [ - "From this view of the dataset, we see the following:\n", + "From this view of the dataset, notice the following:\n", "\n", "1. The first line is a header containing information about the dataset:\n", " * There are 120 total examples. Each example has four features and one of three possible label names. \n", @@ -288,13 +322,23 @@ "metadata": { "id": "9Edhevw7exl6", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ + "# column order in CSV file\n", "column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']\n", + "\n", "feature_names = column_names[:-1]\n", - "label_name = column_names[-1]" + "label_name = column_names[-1]\n", + "\n", + "print(\"Features: {}\".format(feature_names))\n", + "print(\"Label: {}\".format(label_name))" ], "execution_count": 0, "outputs": [] @@ -319,7 +363,12 @@ "metadata": { "id": "sVNlJlUOhkoX", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ @@ -337,23 +386,30 @@ "source": [ "### Create a `tf.data.Dataset`\n", "\n", - "TensorFlow's [Dataset API](https://www.tensorflow.org/programmers_guide/datasets) handles many common cases for feeding data into a model. This is a high-level API for reading data and transforming it into a form used for training. See the [Datasets Quick Start guide](https://www.tensorflow.org/get_started/datasets_quickstart) for more information.\n", + "TensorFlow's [Dataset API](https://www.tensorflow.org/programmers_guide/datasets) handles many common cases for loading data into a model. This is a high-level API for reading data and transforming it into a form used for training. See the [Datasets Quick Start guide](https://www.tensorflow.org/get_started/datasets_quickstart) for more information.\n", "\n", "\n", - "Since our dataset is a CSV-formatted text file, we'll use the the [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/contrib/data/make_csv_dataset) function to easily parse the data into a suitable format. 
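The download cell (`J6c7uEU9rjRM`) is likewise outside this hunk. A sketch of what it presumably contains, with the dataset URL taken from the standard TensorFlow Iris tutorial and therefore an assumption here:

```python
import os

# Assumed URL; the actual cell contents are not part of this patch.
train_dataset_url = "http://download.tensorflow.org/data/iris_training.csv"

# get_file caches the download and returns the local path, which the
# make_csv_dataset call below reads as train_dataset_fp.
train_dataset_fp = tf.keras.utils.get_file(
    fname=os.path.basename(train_dataset_url),
    origin=train_dataset_url)

print("Local copy of the dataset file: {}".format(train_dataset_fp))
```
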
This function is meant to generate data for training models so the default behavior is to shuffle the data (`shuffle=True, shuffle_buffer_size=10000`), and repeat the dataset forever (`num_epochs=None`). Also note the [batch_size](https://developers.google.com/machine-learning/glossary/#batch_size) parameter."
+        "Since the dataset is a CSV-formatted text file, use the [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/contrib/data/make_csv_dataset) function to parse the data into a suitable format. Because this function generates data for training models, the default behavior is to shuffle the data (`shuffle=True, shuffle_buffer_size=10000`), and repeat the dataset forever (`num_epochs=None`). We also set the [batch_size](https://developers.google.com/machine-learning/glossary/#batch_size) parameter."
       ]
     },
     {
       "metadata": {
         "id": "WsxHnz1ebJ2S",
         "colab_type": "code",
-        "colab": {}
+        "colab": {
+          "autoexec": {
+            "startup": false,
+            "wait_interval": 0
+          }
+        }
       },
       "cell_type": "code",
       "source": [
-        "batch_size=32\n",
+        "batch_size = 32\n",
+        "\n",
         "train_dataset = tf.contrib.data.make_csv_dataset(\n",
-        "    train_dataset_fp, batch_size, \n",
+        "    train_dataset_fp,\n",
+        "    batch_size, \n",
         "    column_names=column_names,\n",
         "    label_name=label_name,\n",
         "    num_epochs=1)"
       ],
       "execution_count": 0,
       "outputs": []
     },
     {
       "metadata": {
@@ -368,49 +424,63 @@
       },
       "cell_type": "markdown",
       "source": [
-        "This function returns a `tf.data.Dataset` of `(features, label)` pairs, where `features` is a `{'feature_name': value}` dictionary.\n",
+        "The `make_csv_dataset` function returns a `tf.data.Dataset` of `(features, label)` pairs, where `features` is a dictionary: `{'feature_name': value}`.\n",
         "\n",
         "With eager execution enabled, these `Dataset` objects are iterable. Let's look at a batch of features:"
       ]
     },
     {
       "metadata": {
-        "id": "kRP72tP9C0Qw",
+        "id": "iDuG94H-C122",
         "colab_type": "code",
-        "colab": {}
+        "colab": {
+          "autoexec": {
+            "startup": false,
+            "wait_interval": 0
+          }
+        }
       },
       "cell_type": "code",
       "source": [
-        "features, labels = next(iter(train_dataset))"
+        "features, labels = next(iter(train_dataset))\n",
+        "\n",
+        "features"
       ],
       "execution_count": 0,
       "outputs": []
     },
     {
       "metadata": {
-        "id": "iDuG94H-C122",
-        "colab_type": "code",
-        "colab": {}
+        "id": "E63mArnQaAGz",
+        "colab_type": "text"
       },
-      "cell_type": "code",
+      "cell_type": "markdown",
       "source": [
-        "features"
-      ],
-      "execution_count": 0,
-      "outputs": []
+        "Notice that like-features are grouped together, or *batched*. Each example row's fields are appended to the corresponding feature array. 
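To make the *batched* layout concrete: one element of `train_dataset` is a dictionary of column tensors, each holding `batch_size` values. A quick, illustrative check (not part of the patch):

```python
features, labels = next(iter(train_dataset))

# Each feature column is a rank-1 tensor with one entry per example in the batch.
print(features['sepal_length'].shape)  # -> (32,), matching the batch_size above
print(labels.shape)                    # -> (32,)
```
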
Change the `batch_size` to set the number of examples stored in these feature arrays.\n", + "\n", + "You can start to see some clusters by plotting a few features from the batch:" + ] }, { "metadata": { "id": "me5Wn-9FcyyO", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ - "plt.scatter(features['petal_length'], features['sepal_length'], \n", - " c=labels, cmap='viridis')\n", - "plt.xlabel(\"Petal Length\")\n", - "plt.ylabel(\"Sepal Length\")\n" + "plt.scatter(features['petal_length'],\n", + " features['sepal_length'],\n", + " c=labels,\n", + " cmap='viridis')\n", + "\n", + "plt.xlabel(\"Petal length\")\n", + "plt.ylabel(\"Sepal length\");" ], "execution_count": 0, "outputs": [] @@ -422,22 +492,28 @@ }, "cell_type": "markdown", "source": [ - "To simplify the model building, let's repackage the features dictionary into an array with shape `(batch_size, num_features)`.\n", - "\n", - "To do this we'll write a simple function using the [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) method to pack the features into a single array.\n", + "To simplify the model building step, create a function to repackage the features dictionary into a single array with shape: `(batch_size, num_features)`.\n", "\n", - "Stack takes a list of tensors, and stacks them along a new axis, like this:\n" + "This function uses the [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) method which takes values from a list of tensors and creates a combined tensor at the specified dimension." ] }, { "metadata": { - "id": "lSI2KLB4CAtc", + "id": "jm932WINcaGU", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ - "tf.stack(list(features.values()), axis=1)[:10]" + "def pack_features_vector(features, labels):\n", + " \"\"\"Pack the features into a single array.\"\"\"\n", + " features = tf.stack(list(features.values()), axis=1)\n", + " return features, labels" ], "execution_count": 0, "outputs": [] @@ -449,21 +525,22 @@ }, "cell_type": "markdown", "source": [ - "Then we'll use the [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) method to stack the `features` in each `(features,label)` pair in the dataset. 
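Since the `tf.stack` call is the heart of `pack_features_vector`, a tiny standalone illustration of its behavior may help (the values are invented for the example):

```python
a = tf.constant([1.0, 2.0])      # e.g. a column of sepal lengths
b = tf.constant([3.0, 4.0])      # e.g. a column of petal lengths

# Stacking along axis=1 pairs up the i-th entries of each input tensor:
print(tf.stack([a, b], axis=1))  # -> [[1. 3.], [2. 4.]], shape (2, 2)
```
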
" + "Then use the [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) method to pack the `features` of each `(features,label)` pair into the training dataset:" ] }, { "metadata": { - "id": "jm932WINcaGU", + "id": "ZbDkzGZIkpXf", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ - "def pack_features_vector(features,labels):\n", - " features = tf.stack(list(features.values()), axis=1)\n", - " return features, labels\n", - " \n", "train_dataset = train_dataset.map(pack_features_vector)" ], "execution_count": 0, @@ -483,13 +560,18 @@ "metadata": { "id": "kex9ibEek6Tr", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ "features, labels = next(iter(train_dataset))\n", "\n", - "print(features[:10])" + "print(features[:5])" ], "execution_count": 0, "outputs": [] @@ -544,7 +626,12 @@ "metadata": { "id": "2fZ6oL2ig3ZK", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ @@ -564,7 +651,7 @@ }, "cell_type": "markdown", "source": [ - "The *[activation function](https://developers.google.com/machine-learning/crash-course/glossary#activation_function)* determines the output shape of each node in the layer. These non-linearities are important as without them the model would be equivalent to a single layer. There are many [available activations](https://www.tensorflow.org/api_docs/python/tf/keras/activations), but [ReLU](https://developers.google.com/machine-learning/crash-course/glossary#ReLU) is common for hidden layers.\n", + "The *[activation function](https://developers.google.com/machine-learning/crash-course/glossary#activation_function)* determines the output shape of each node in the layer. These non-linearities are important—without them the model would be equivalent to a single layer. There are many [available activations](https://www.tensorflow.org/api_docs/python/tf/keras/activations), but [ReLU](https://developers.google.com/machine-learning/crash-course/glossary#ReLU) is common for hidden layers.\n", "\n", "The ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively." ] @@ -585,7 +672,12 @@ "metadata": { "id": "xe6SQ5NrpB-I", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ @@ -602,16 +694,21 @@ }, "cell_type": "markdown", "source": [ - "For each example it returns a [logit](https://developers.google.com/machine-learning/crash-course/glossary#logit) for each class. \n", + "Here, each example returns a [logit](https://developers.google.com/machine-learning/crash-course/glossary#logit) for each class. 
\n", "\n", - "To convert to a probability for each class, for each example, we use the [softmax](https://developers.google.com/machine-learning/crash-course/glossary#softmax) function:" + "To convert these logits to a probability for each class, use the [softmax](https://developers.google.com/machine-learning/crash-course/glossary#softmax) function:" ] }, { "metadata": { "id": "_tRwHZmTNTX2", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ @@ -627,33 +724,24 @@ }, "cell_type": "markdown", "source": [ - "Taking the `tf.argmax` across the classes would give us the predicted class index.\n", - "\n", - "The model hasn't been trained yet, so these aren't very good predictions." + "Taking the `tf.argmax` across classes gives us the predicted class index. But, the model hasn't been trained yet, so these aren't good predictions." ] }, { "metadata": { "id": "-Jzm_GoErz8B", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ - "tf.argmax(predictions, axis=1)" - ], - "execution_count": 0, - "outputs": [] - }, - { - "metadata": { - "id": "8w3eDAp9o0G9", - "colab_type": "code", - "colab": {} - }, - "cell_type": "code", - "source": [ - "labels" + "print(\"Prediction: {}\".format(tf.argmax(predictions, axis=1)))\n", + "print(\" Labels: {}\".format(labels))" ], "execution_count": 0, "outputs": [] @@ -690,37 +778,22 @@ "metadata": { "id": "tMAT4DcMPwI-", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ "def loss(model, x, y):\n", " y_ = model(x)\n", " return tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_)\n", - "\n" - ], - "execution_count": 0, - "outputs": [] - }, - { - "metadata": { - "id": "xSFT90LsQRNV", - "colab_type": "text" - }, - "cell_type": "markdown", - "source": [ - "Let's test out this function:" - ] - }, - { - "metadata": { - "id": "uDdxM3aeQQyx", - "colab_type": "code", - "colab": {} - }, - "cell_type": "code", - "source": [ - "loss(model, features, labels)" + "\n", + "\n", + "l = loss(model, features, labels)\n", + "print(\"Loss test: {}\".format(l))" ], "execution_count": 0, "outputs": [] @@ -732,14 +805,19 @@ }, "cell_type": "markdown", "source": [ - "To perform the optimization we will use the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) context to calculate the *[gradients](https://developers.google.com/machine-learning/crash-course/glossary#gradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/programmers_guide/eager)." + "Use the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) context to calculate the *[gradients](https://developers.google.com/machine-learning/crash-course/glossary#gradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/programmers_guide/eager)." 
]
     },
     {
       "metadata": {
         "id": "x57HcKWhKkei",
         "colab_type": "code",
-        "colab": {}
+        "colab": {
+          "autoexec": {
+            "startup": false,
+            "wait_interval": 0
+          }
+        }
       },
       "cell_type": "code",
       "source": [
@@ -782,20 +860,25 @@
       },
       "cell_type": "markdown",
       "source": [
-        "Let's setup the optimizer, and the `global_step` counter:"
+        "Let's set up the optimizer and the `global_step` counter:"
       ]
     },
     {
       "metadata": {
         "id": "8xxi2NNGKwG_",
         "colab_type": "code",
-        "colab": {}
+        "colab": {
+          "autoexec": {
+            "startup": false,
+            "wait_interval": 0
+          }
+        }
       },
       "cell_type": "code",
       "source": [
-        "\n",
         "optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)\n",
-        "global_step=tf.train.get_or_create_global_step()"
+        "\n",
+        "global_step = tf.train.get_or_create_global_step()"
       ],
       "execution_count": 0,
       "outputs": []
@@ -807,27 +890,31 @@
       },
       "cell_type": "markdown",
       "source": [
-        "Now let's take a single optimization step:"
+        "We'll use this to calculate a single optimization step:"
      ]
     },
     {
       "metadata": {
         "id": "rxRNTFVe56RG",
         "colab_type": "code",
-        "colab": {}
+        "colab": {
+          "autoexec": {
+            "startup": false,
+            "wait_interval": 0
+          }
+        }
       },
       "cell_type": "code",
       "source": [
         "loss_value, grads = grad(model, features, labels)\n",
         "\n",
-        "print(\"Step: \", global_step.numpy())\n",
-        "print(\"Initial loss:\", loss_value.numpy())\n",
+        "print(\"Step: {}, Initial Loss: {}\".format(global_step.numpy(),\n",
+        "                                          loss_value.numpy()))\n",
         "\n",
         "optimizer.apply_gradients(zip(grads, model.variables), global_step)\n",
         "\n",
-        "print()\n",
-        "print(\"Step: \", global_step.numpy())\n",
-        "print(\"Loss: \", loss(model, features, labels).numpy())"
+        "print(\"Step: {}, Loss: {}\".format(global_step.numpy(),\n",
+        "                                  loss(model, features, labels).numpy()))"
       ],
       "execution_count": 0,
       "outputs": []
@@ -843,7 +930,7 @@
         "\n",
         "With all the pieces in place, the model is ready for training! A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps:\n",
         "\n",
-        "1. Iterate each epoch. An epoch is one pass through the dataset.\n",
+        "1. Iterate each *epoch*. An epoch is one pass through the dataset.\n",
         "2. Within an epoch, iterate over each example in the training `Dataset` grabbing its *features* (`x`) and *label* (`y`).\n",
         "3. Using the example's features, make a prediction and compare it with the label. Measure the inaccuracy of the prediction and use that to calculate the model's loss and gradients.\n",
        "4. 
Use an `optimizer` to update the model's variables.\n", @@ -857,7 +944,12 @@ "metadata": { "id": "AIgulGRUhpto", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ @@ -923,7 +1015,12 @@ "metadata": { "id": "agjvNd2iUGFn", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ @@ -1003,7 +1100,12 @@ "metadata": { "id": "Ps3_9dJ3Lodk", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ @@ -1019,12 +1121,18 @@ "metadata": { "id": "SRMWCu30bnxH", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ "test_dataset = tf.contrib.data.make_csv_dataset(\n", - " train_dataset_fp, batch_size, \n", + " train_dataset_fp,\n", + " batch_size, \n", " column_names=column_names,\n", " label_name='species',\n", " num_epochs=1,\n", @@ -1051,7 +1159,12 @@ "metadata": { "id": "Tw03-MK1cYId", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ @@ -1081,7 +1194,12 @@ "metadata": { "id": "uNwt2eMeOane", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [ @@ -1112,7 +1230,12 @@ "metadata": { "id": "kesTS5Lzv-M2", "colab_type": "code", - "colab": {} + "colab": { + "autoexec": { + "startup": false, + "wait_interval": 0 + } + } }, "cell_type": "code", "source": [