Commit
Fix some documentation.
kirilg committed Feb 12, 2016
1 parent 3985fca commit 19f5356
Showing 8 changed files with 21 additions and 40 deletions.
9 changes: 7 additions & 2 deletions README.md
@@ -1,7 +1,7 @@
#TensorFlow Serving

TensorFlow Serving is an open-source software library for serving
-machine-learned models. It deals with the *inference* aspect of machine
+machine learning models. It deals with the *inference* aspect of machine
learning, taking models after *training* and managing their lifetimes, providing
clients with versioned access via a high-performance, reference-counted lookup
table.
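
The versioned, reference-counted lookup table mentioned above can be sketched roughly as follows. This is a simplified illustration in Python with invented names, not TensorFlow Serving's real (C++) API: clients pin a servable version by holding a handle, and the version may only be unloaded once its reference count drops to zero.

```python
# Toy sketch (invented names, not the real TensorFlow Serving API): a
# lookup table hands out versioned handles; a version's reference count
# tells the system when it is safe to unload.
import collections

class VersionTable:
    def __init__(self):
        self._refcounts = collections.Counter()

    def acquire(self, name, version):
        """Hand out a handle to one servable version, pinning it."""
        self._refcounts[(name, version)] += 1
        return (name, version)

    def release(self, handle):
        self._refcounts[handle] -= 1

    def can_unload(self, name, version):
        """A version may be unloaded only once no client holds a handle."""
        return self._refcounts[(name, version)] == 0

table = VersionTable()
handle = table.acquire("mnist", 1)
print(table.can_unload("mnist", 1))  # False
table.release(handle)
print(table.can_unload("mnist", 1))  # True
```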
@@ -23,7 +23,7 @@ but at its core it manages arbitrary versioned items (*servables*) with
pass-through to their native APIs. In addition to trained TensorFlow models,
servables can include other assets needed for inference such as embeddings,
vocabularies and feature transformation configs, or even non-TensorFlow-based
-machine-learned models.
+machine learning models.

The architecture is highly modular. You can use some parts individually (e.g.
batch scheduling) or use all the parts together. There are numerous plug-in
@@ -41,6 +41,11 @@ tracking requests and bugs.

See [install instructions](tensorflow_serving/g3doc/setup.md).

+##Tutorials
+
+* [Basic tutorial](tensorflow_serving/g3doc/serving_basic.md)
+* [Advanced tutorial](tensorflow_serving/g3doc/serving_advanced.md)
+
##For more information

* [Serving architecture overview](tensorflow_serving/g3doc/architecture_overview.md)
12 changes: 4 additions & 8 deletions tensorflow_serving/g3doc/architecture_overview.md
@@ -1,7 +1,3 @@
----
----
-<style>hr{display:none;}</style>
-
# Architecture Overview

TensorFlow Serving is a flexible, high-performance serving system for machine
@@ -102,7 +98,7 @@ versions to the Manager, it supercedes the previous list for that servable
stream. The Manager unloads any previously loaded versions that no longer
appear in the list.

-See the [advanced tutorial](serving_advanced) to see how version loading
+See the [advanced tutorial](serving_advanced.md) to see how version loading
works in practice.
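
The reconciliation described above can be sketched as follows. This is a simplified Python illustration with invented names (the real Manager is C++): a new aspired-versions list supersedes the previous one, and versions absent from it are unloaded.

```python
# Simplified illustration (invented names, not the real Manager API): when
# a Source sends a new aspired-versions list for a servable stream, any
# loaded version missing from that list gets unloaded.
def reconcile(loaded_versions, aspired_versions):
    """Return (to_unload, to_load) for one servable stream."""
    loaded, aspired = set(loaded_versions), set(aspired_versions)
    to_unload = loaded - aspired  # loaded but no longer aspired
    to_load = aspired - loaded    # aspired but not yet loaded
    return to_unload, to_load

# Versions 1 and 2 are loaded; the Source now aspires versions 2 and 3:
to_unload, to_load = reconcile({1, 2}, [2, 3])
print(to_unload, to_load)  # {1} {3}
```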

### Managers
@@ -200,7 +196,7 @@ easy & fast to create new sources. For example, TensorFlow Serving includes a
utility to wrap polling behavior around a simple source. Sources are closely
related to Loaders for specific algorithms and data hosting servables.

-See the [Custom Source](custom_source) document for more about how to create
+See the [Custom Source](custom_source.md) document for more about how to create
a custom Source.
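
The polling behavior mentioned above amounts to rescanning a location and emitting paths not seen before. A toy version (the helper name is invented; the real utility is part of the C++ core):

```python
# Toy polling-source sketch (invented helper, not the real utility):
# scan a base path once and report newly appeared version directories.
import os

def poll_once(base_path, already_seen, emit):
    """Emit each numeric version directory seen for the first time,
    in sorted order, recording it in already_seen."""
    for entry in sorted(os.listdir(base_path)):
        path = os.path.join(base_path, entry)
        if entry.isdigit() and os.path.isdir(path) and path not in already_seen:
            already_seen.add(path)
            emit(path)
```

A real Source would run something like this on a timer and hand each discovered path to a SourceAdapter.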

### Loaders
@@ -211,7 +207,7 @@ new Loader in order to load, provide access to, and unload an instance of a
new type of servable machine learning model. We anticipate creating Loaders
for lookup tables and additional algorithms.

-See the [Custom Servable](custom_servable) document to learn how to create a
+See the [Custom Servable](custom_servable.md) document to learn how to create a
custom servable.

### Batcher
@@ -226,4 +222,4 @@ process.
## Next Steps

To get started with TensorFlow Serving, try the
-[Basic Tutorial](serving_basic).
+[Basic Tutorial](serving_basic.md).
6 changes: 1 addition & 5 deletions tensorflow_serving/g3doc/custom_servable.md
@@ -1,7 +1,3 @@
----
----
-<style>hr{display:none;}</style>
-
# Creating a new kind of servable

This document explains how to extend TensorFlow Serving with a new kind of
@@ -26,7 +22,7 @@ define methods for loading, accessing and unloading your type of servable. The
data from which the servable is loaded can come from anywhere, but it is common
for it to come from a storage-system path; let's assume that's the case for
`YourServable`. Let's further assume you already have a `Source<StoragePath>`
-that you are happy with (if not, see the [Custom Source](custom_source)
+that you are happy with (if not, see the [Custom Source](custom_source.md)
document). In addition to your Loader, you will need to define a `SourceAdapter`
that instantiates a `Loader` from a given storage path. Most simple use-cases
can specify the two objects concisely via the `SimpleLoaderSourceAdapter` class
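
The Loader/SourceAdapter pairing described here can be sketched in Python with invented names (the real interfaces are C++ templates such as `SimpleLoaderSourceAdapter`; treat this only as a shape, not the API):

```python
# Rough shape of the Loader / SourceAdapter pairing (invented names; the
# real interfaces are C++ templates such as SimpleLoaderSourceAdapter).
class Loader:
    """Loads, provides access to, and unloads one servable instance."""
    def __init__(self, load_fn):
        self._load_fn = load_fn
        self.servable = None

    def load(self):
        self.servable = self._load_fn()

    def unload(self):
        self.servable = None

def source_adapter(storage_path):
    """Instantiate a Loader from a storage path emitted by a Source."""
    return Loader(lambda: {"loaded_from": storage_path})

loader = source_adapter("/tmp/mnist_model/00000001")
loader.load()
print(loader.servable)  # {'loaded_from': '/tmp/mnist_model/00000001'}
```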
4 changes: 0 additions & 4 deletions tensorflow_serving/g3doc/custom_source.md
@@ -1,7 +1,3 @@
----
----
-<style>hr{display:none;}</style>
-
# Creating a module that discovers new servable paths

This document explains how to extend TensorFlow Serving to monitor different
12 changes: 4 additions & 8 deletions tensorflow_serving/g3doc/index.md
@@ -1,7 +1,3 @@
----
----
-<style>hr{display:none;}</style>
-
TensorFlow Serving is a flexible, high-performance serving system for machine
learning models, designed for production environments. TensorFlow Serving
makes it easy to deploy new algorithms and experiments, while keeping the same
@@ -11,10 +7,10 @@ types of models and data.

To get started with TensorFlow Serving:

-* Read the [overview](architecture_overview)
-* [Set up](setup) your environment
-* Do the [basic tutorial](serving_basic)
+* Read the [overview](architecture_overview.md)
+* [Set up](setup.md) your environment
+* Do the [basic tutorial](serving_basic.md)



-![TensorFlow Serving Diagram](images/tf_diagram.svg){: width="500"}
+![TensorFlow Serving Diagram](images/tf_diagram.svg){: width="500"}
2 changes: 1 addition & 1 deletion tensorflow_serving/g3doc/serving_advanced.md
@@ -53,7 +53,7 @@ $>bazel-bin/tensorflow_serving/example/mnist_export --training_iteration=100 --e
Train (with 2000 iterations) and export the second version of model:

~~~
-$>bazel-bin/tensorflow_serving/example:mnist_export --training_iteration=2000 --export_version=2 /tmp/mnist_model
+$>bazel-bin/tensorflow_serving/example/mnist_export --training_iteration=2000 --export_version=2 /tmp/mnist_model
~~~
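
Each export command above adds one more version under `/tmp/mnist_model`. A small hypothetical helper, assuming the exporter names each version's subdirectory numerically (e.g. `00000001`, `00000002`), can report the newest version on disk:

```python
# Hypothetical helper; assumes each export creates a numerically named
# version subdirectory (e.g. .../00000001, .../00000002) under export_dir.
import os

def latest_version(export_dir):
    versions = [int(d) for d in os.listdir(export_dir) if d.isdigit()]
    return max(versions) if versions else None

# After the two exports above, latest_version("/tmp/mnist_model") would be 2.
```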

As you can see in `mnist_export.py`, the training and exporting is done the
8 changes: 2 additions & 6 deletions tensorflow_serving/g3doc/serving_basic.md
@@ -1,7 +1,3 @@
----
----
-<style>hr{display:none;}</style>
-
# Serving a TensorFlow Model

This tutorial shows you how to use TensorFlow Serving components to export a
@@ -12,7 +8,7 @@ requests), and it calculates an aggregate inference error rate. If you're
already familiar with TensorFlow Serving, and you want to create a more complex
server that handles batched inference requests, and discovers and serves new
versions of a TensorFlow model that is being dynamically updated, see the
-[TensorFlow Serving advanced tutorial](serving_advanced).
+[TensorFlow Serving advanced tutorial](serving_advanced.md).

This tutorial uses the simple Softmax Regression model introduced in the
TensorFlow tutorial for handwritten image (MNIST data) classification. If you
@@ -26,7 +22,7 @@ trains and exports the model, and a C++ file
([mnist_inference.cc](https://github.com/tensorflow/serving/tree/master/tensorflow_serving/example/mnist_inference.cc)) that loads the
exported model and runs a [gRPC](http://www.grpc.io) service to serve it.

-Before getting started, please complete the [prerequisites](setup#prerequisites).
+Before getting started, please complete the [prerequisites](setup.md#prerequisites).

## Train And Export TensorFlow Model

8 changes: 2 additions & 6 deletions tensorflow_serving/g3doc/setup.md
@@ -1,7 +1,3 @@
----
----
-<style>hr{display:none;}</style>
-
# Installation

## Prerequisites
@@ -36,7 +32,7 @@ following steps:

Our tutorials use [gRPC](http://www.grpc.io) (0.13 or higher) as our RPC
framework. You can find the installation instructions
-[here](https://github.com/grpc/grpc/issues/5111).
+[here](https://github.com/grpc/grpc/tree/master/src/python/grpcio).

### Packages

@@ -114,5 +110,5 @@ To test your installation, execute:
bazel test tensorflow_serving/...
~~~

-See the [basic tutorial](serving_basic) and [advanced tutorial](serving_advanced)
+See the [basic tutorial](serving_basic.md) and [advanced tutorial](serving_advanced.md)
for more in-depth examples of running TensorFlow Serving.
