Commit

One more minor documentation fix.
kirilg committed Feb 13, 2016
1 parent 19f5356 commit 31784e7
Showing 1 changed file with 6 additions and 10 deletions.
16 changes: 6 additions & 10 deletions tensorflow_serving/g3doc/serving_advanced.md
@@ -1,7 +1,3 @@
----
----
-<style>hr{display:none;}</style>
-
# Serving Dynamically Updated TensorFlow Model with Batching

This tutorial shows you how to use TensorFlow Serving components to build a
@@ -10,7 +6,7 @@ TensorFlow model. You'll also learn how to use TensorFlow Serving
batcher to do batched inference. The code examples in this tutorial focus on the
discovery, batching, and serving logic. If you just want to use TensorFlow
Serving to serve a single version model without batching, see
-[TensorFlow Serving basic tutorial](serving_basic).
+[TensorFlow Serving basic tutorial](serving_basic.md).

This tutorial uses the simple Softmax Regression model introduced in the
TensorFlow tutorial for handwritten digit (MNIST data) classification. If you
@@ -33,7 +29,7 @@ This tutorial steps through the following tasks:
4. Serve requests with the TensorFlow Serving manager.
5. Run and test the service.

-Before getting started, please complete the [prerequisites](setup#prerequisites).
+Before getting started, please complete the [prerequisites](setup.md#prerequisites).

## Train And Export TensorFlow Model

@@ -58,7 +54,7 @@ $>bazel-bin/tensorflow_serving/example/mnist_export --training_iteration=2000 --

As you can see in `mnist_export.py`, the training and exporting are done the
same way they are in the
-[TensorFlow Serving basic tutorial](serving_basic). For
+[TensorFlow Serving basic tutorial](serving_basic.md). For
demonstration purposes, you're intentionally dialing down the training
iterations for the first run and exporting it as v1, while training it normally
for the second run and exporting it as v2 to the same parent directory -- as we
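
For orientation, both runs export into sibling version directories under the same parent, which is what lets the manager discover v2 later. A rough sketch of the resulting layout (assuming `/tmp/mnist_model` as the export root, as in the basic tutorial, and the exporter's zero-padded version naming; both are assumptions, since the full commands are elided above):

```
/tmp/mnist_model/
├── 00000001/   <- v1, the intentionally under-trained first export
└── 00000002/   <- v2, the fully trained second export
```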
@@ -128,8 +124,8 @@ monitors cloud storage instead of local storage, or you could build a version
policy plugin that does version transition in a different way -- in fact, you
could even build a custom model plugin that serves non-TensorFlow models. These
topics are out of scope for this tutorial; however, you can refer to the
-[custom source](custom_source) and
-[custom servable](custom_servable) documents for more information.
+[custom source](custom_source.md) and
+[custom servable](custom_servable.md) documents for more information.
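
To make that extension point concrete, here is a minimal sketch of what a custom source involves: the `Source<StoragePath>` interface reduces to accepting an aspired-versions callback and invoking it whenever new version paths appear. `CloudStoragePathSource` and `NotifyNewVersion` are hypothetical names used only for illustration, and the actual cloud-storage watching logic is elided; see the custom source document for the real walkthrough.

```c++
#include <utility>
#include <vector>

#include "tensorflow_serving/core/servable_data.h"
#include "tensorflow_serving/core/servable_id.h"
#include "tensorflow_serving/core/source.h"
#include "tensorflow_serving/core/storage_path.h"

namespace tensorflow {
namespace serving {

// Hypothetical source that aspires new model versions found in cloud storage.
class CloudStoragePathSource : public Source<StoragePath> {
 public:
  void SetAspiredVersionsCallback(AspiredVersionsCallback callback) override {
    callback_ = std::move(callback);
  }

  // Called (e.g. by a polling thread, not shown) when a new version appears.
  void NotifyNewVersion(const string& servable_name, int64 version,
                        const StoragePath& path) {
    std::vector<ServableData<StoragePath>> versions;
    versions.emplace_back(ServableId{servable_name, version}, path);
    callback_(servable_name, std::move(versions));
  }

 private:
  AspiredVersionsCallback callback_;
};

}  // namespace serving
}  // namespace tensorflow
```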
## Batching
@@ -232,7 +228,7 @@ To put all these into the context of this tutorial:
`DoClassifyInBatch` is then just about requesting `SessionBundle` from the
manager and using it to run inference. Most of the logic and flow is very similar
to the logic and flow described in the
-[TensorFlow Serving basic tutorial](serving_basic), with just a few
+[TensorFlow Serving basic tutorial](serving_basic.md), with just a few
key changes (a sketch of the flow follows the list below):
* The input tensor now has its first dimension set to variable batch size at
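
As a rough sketch of that flow (assuming a `tensorflow::serving::Manager*` that already has the model loaded, and hypothetical tensor names `"x"` and `"y"` standing in for the exported signature's real ones; the actual `DoClassifyInBatch` in the tutorial's source differs in detail):

```c++
#include <utility>
#include <vector>

#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/lib/core/errors.h"
#include "tensorflow/core/lib/core/status.h"
#include "tensorflow_serving/core/manager.h"
#include "tensorflow_serving/core/servable_handle.h"
#include "tensorflow_serving/session_bundle/session_bundle.h"

using tensorflow::Status;
using tensorflow::Tensor;
using tensorflow::serving::Manager;
using tensorflow::serving::ServableHandle;
using tensorflow::serving::ServableRequest;
using tensorflow::serving::SessionBundle;

// Sketch: classify one already-assembled batch against the latest "mnist"
// servable. The input's first dimension is the (variable) batch size.
Status ClassifyBatch(Manager* manager, const Tensor& batched_input,
                     std::vector<Tensor>* scores) {
  // Ask the manager for the most recent loaded version of the model. The
  // handle keeps that version alive until it goes out of scope.
  ServableHandle<SessionBundle> bundle;
  TF_RETURN_IF_ERROR(
      manager->GetServableHandle(ServableRequest::Latest("mnist"), &bundle));

  // "x" and "y" are hypothetical tensor names for this sketch; the real names
  // come from the exported graph's signature.
  return bundle->session->Run({{"x", batched_input}}, {"y"},
                              /*target_node_names=*/{}, scores);
}
```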
