From 31784e74e056149c2ddc771168162329f4e6eba3 Mon Sep 17 00:00:00 2001
From: Kiril Gorovoy
Date: Fri, 12 Feb 2016 16:04:12 -0800
Subject: [PATCH] One more minor documentation fix.

---
 tensorflow_serving/g3doc/serving_advanced.md | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/tensorflow_serving/g3doc/serving_advanced.md b/tensorflow_serving/g3doc/serving_advanced.md
index 32d660a44cb..457d2363b53 100644
--- a/tensorflow_serving/g3doc/serving_advanced.md
+++ b/tensorflow_serving/g3doc/serving_advanced.md
@@ -1,7 +1,3 @@
----
----
-
-
 # Serving Dynamically Updated TensorFlow Model with Batching
 
 This tutorial shows you how to use TensorFlow Serving components to build a
@@ -10,7 +6,7 @@ TensorFlow model. You'll also learn how to use TensorFlow Serving batcher to
 do batched inference. The code examples in this tutorial focus on the
 discovery, batching, and serving logic. If you just want to use TensorFlow
 Serving to serve a single version model without batching, see
-[TensorFlow Serving basic tutorial](serving_basic).
+[TensorFlow Serving basic tutorial](serving_basic.md).
 
 This tutorial uses the simple Softmax Regression model introduced in the
 TensorFlow tutorial for handwritten image (MNIST data) classification. If you
@@ -33,7 +29,7 @@ This tutorial steps through the following tasks:
 4. Serve request with TensorFlow Serving manager.
 5. Run and test the service.
 
-Before getting started, please complete the [prerequisites](setup#prerequisites).
+Before getting started, please complete the [prerequisites](setup.md#prerequisites).
 
 ## Train And Export TensorFlow Model
 
@@ -58,7 +54,7 @@ $>bazel-bin/tensorflow_serving/example/mnist_export --training_iteration=2000 --
 
 As you can see in `mnist_export.py`, the training and exporting is done the
 same way it is in the
-[TensorFlow Serving basic tutorial](serving_basic). For
+[TensorFlow Serving basic tutorial](serving_basic.md). For
 demonstration purposes, you're intentionally dialing down the training
 iterations for the first run and exporting it as v1, while training it normally
 for the second run and exporting it as v2 to the same parent directory -- as we
@@ -128,8 +124,8 @@ monitors cloud storage instead of local storage, or you could build a version
 policy plugin that does version transition in a different way -- in fact, you
 could even build a custom model plugin that serves non-TensorFlow models.
 These topics are out of scope for this tutorial, however, you can refer to the
-[custom source](custom_source) and
-[custom servable](custom_servable) documents for more information.
+[custom source](custom_source.md) and
+[custom servable](custom_servable.md) documents for more information.
 
 ## Batching
 
@@ -232,7 +228,7 @@ To put all these into the context of this tutorial:
 `DoClassifyInBatch` is then just about requesting `SessionBundle` from the
 manager and uses it to run inference. Most of the logic and flow is very
 similar to the logic and flow described in the
-[TensorFlow Serving basic tutorial](serving_basic), with just a few
+[TensorFlow Serving basic tutorial](serving_basic.md), with just a few
 key changes:
 
 * The input tensor now has its first dimension set to variable batch size at