From 19f5356c108e5331e8ae35a21b335b37b519fa61 Mon Sep 17 00:00:00 2001
From: Kiril Gorovoy
Date: Fri, 12 Feb 2016 15:48:10 -0800
Subject: [PATCH] Fix some documentation.

---
 README.md                                         |  9 +++++++--
 tensorflow_serving/g3doc/architecture_overview.md | 12 ++++--------
 tensorflow_serving/g3doc/custom_servable.md       |  6 +-----
 tensorflow_serving/g3doc/custom_source.md         |  4 ----
 tensorflow_serving/g3doc/index.md                 | 12 ++++--------
 tensorflow_serving/g3doc/serving_advanced.md      |  2 +-
 tensorflow_serving/g3doc/serving_basic.md         |  8 ++------
 tensorflow_serving/g3doc/setup.md                 |  8 ++------
 8 files changed, 21 insertions(+), 40 deletions(-)

diff --git a/README.md b/README.md
index 7a5dbf81b1e..e7b44959187 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
 #TensorFlow Serving
 
 TensorFlow Serving is an open-source software library for serving
-machine-learned models. It deals with the *inference* aspect of machine
+machine learning models. It deals with the *inference* aspect of machine
 learning, taking models after *training* and managing their lifetimes,
 providing clients with versioned access via a high-performance,
 reference-counted lookup table.
@@ -23,7 +23,7 @@ but at its core it manages arbitrary versioned items (*servables*) with
 pass-through to their native APIs. In addition to trained TensorFlow models,
 servables can include other assets needed for inference such as embeddings,
 vocabularies and feature transformation configs, or even non-TensorFlow-based
-machine-learned models.
+machine learning models.
 
 The architecture is highly modular. You can use some parts individually (e.g.
 batch scheduling) or use all the parts together. There are numerous plug-in
@@ -41,6 +41,11 @@ tracking requests and bugs.
 
 See [install instructions](tensorflow_serving/g3doc/setup.md).
 
+##Tutorials
+
+* [Basic tutorial](tensorflow_serving/g3doc/serving_basic.md)
+* [Advanced tutorial](tensorflow_serving/g3doc/serving_advanced.md)
+
 ##For more information
 
 * [Serving architecture overview](tensorflow_serving/g3doc/architecture_overview.md)
diff --git a/tensorflow_serving/g3doc/architecture_overview.md b/tensorflow_serving/g3doc/architecture_overview.md
index bb2d5a91f9f..edb9a462ff7 100644
--- a/tensorflow_serving/g3doc/architecture_overview.md
+++ b/tensorflow_serving/g3doc/architecture_overview.md
@@ -1,7 +1,3 @@
----
----
-
-
 # Architecture Overview
 
 TensorFlow Serving is a flexible, high-performance serving system for machine
@@ -102,7 +98,7 @@ versions to the Manager, it supercedes the previous list for that servable
 stream. The Manager unloads any previously loaded versions that no longer
 appear in the list.
 
-See the [advanced tutorial](serving_advanced) to see how version loading
+See the [advanced tutorial](serving_advanced.md) to see how version loading
 works in practice.
 
 ### Managers
@@ -200,7 +196,7 @@ easy & fast to create new sources. For example, TensorFlow Serving includes a
 utility to wrap polling behavior around a simple source. Sources are closely
 related to Loaders for specific algorithms and data hosting servables.
 
-See the [Custom Source](custom_source) document for more about how to create
+See the [Custom Source](custom_source.md) document for more about how to create
 a custom Source.
 
 ### Loaders
@@ -211,7 +207,7 @@ new Loader in order to load, provide access to, and unload an instance of a
 new type of servable machine learning model. We anticipate creating Loaders for
 lookup tables and additional algorithms.
 
-See the [Custom Servable](custom_servable) document to learn how to create a
+See the [Custom Servable](custom_servable.md) document to learn how to create a
 custom servable.
 
 ### Batcher
@@ -226,4 +222,4 @@ process.
 ## Next Steps
 
 To get started with TensorFlow Serving, try the
-[Basic Tutorial](serving_basic).
+[Basic Tutorial](serving_basic.md).
diff --git a/tensorflow_serving/g3doc/custom_servable.md b/tensorflow_serving/g3doc/custom_servable.md
index 478e95a7c08..15e5b447c5f 100644
--- a/tensorflow_serving/g3doc/custom_servable.md
+++ b/tensorflow_serving/g3doc/custom_servable.md
@@ -1,7 +1,3 @@
----
----
-
-
 # Creating a new kind of servable
 
 This document explains how to extend TensorFlow Serving with a new kind of
@@ -26,7 +22,7 @@ define methods for loading, accessing and unloading your type of servable.
 The data from which the servable is loaded can come from anywhere, but it is
 common for it to come from a storage-system path; let's assume that's the case
 for `YourServable`. Let's further assume you already have a `Source`
-that you are happy with (if not, see the [Custom Source](custom_source)
+that you are happy with (if not, see the [Custom Source](custom_source.md)
 document). In addition to your Loader, you will need to define a `SourceAdapter`
 that instantiates a `Loader` from a given storage path. Most simple use-cases
 can specify the two objects concisely via the `SimpleLoaderSourceAdapter` class
diff --git a/tensorflow_serving/g3doc/custom_source.md b/tensorflow_serving/g3doc/custom_source.md
index 569776d2b8d..a6b8492b696 100644
--- a/tensorflow_serving/g3doc/custom_source.md
+++ b/tensorflow_serving/g3doc/custom_source.md
@@ -1,7 +1,3 @@
----
----
-
-
 # Creating a module that discovers new servable paths
 
 This document explains how to extend TensorFlow Serving to monitor different
diff --git a/tensorflow_serving/g3doc/index.md b/tensorflow_serving/g3doc/index.md
index 326b4693b83..d1869e0072f 100644
--- a/tensorflow_serving/g3doc/index.md
+++ b/tensorflow_serving/g3doc/index.md
@@ -1,7 +1,3 @@
----
----
-
-
 TensorFlow Serving is a flexible, high-performance serving system for machine
 learning models, designed for production environments. TensorFlow Serving makes
 it easy to deploy new algorithms and experiments, while keeping the same
@@ -11,10 +7,10 @@ types of models and data.
 
 To get started with TensorFlow Serving:
 
-* Read the [overview](architecture_overview)
-* [Set up](setup) your environment
-* Do the [basic tutorial](serving_basic)
+* Read the [overview](architecture_overview.md)
+* [Set up](setup.md) your environment
+* Do the [basic tutorial](serving_basic.md)
 
-![TensorFlow Serving Diagram](images/tf_diagram.svg){: width="500"}
\ No newline at end of file
+![TensorFlow Serving Diagram](images/tf_diagram.svg){: width="500"}
diff --git a/tensorflow_serving/g3doc/serving_advanced.md b/tensorflow_serving/g3doc/serving_advanced.md
index fc805593fba..32d660a44cb 100644
--- a/tensorflow_serving/g3doc/serving_advanced.md
+++ b/tensorflow_serving/g3doc/serving_advanced.md
@@ -53,7 +53,7 @@ $>bazel-bin/tensorflow_serving/example/mnist_export --training_iteration=100 --e
 Train (with 2000 iterations) and export the second version of model:
 
 ~~~
-$>bazel-bin/tensorflow_serving/example:mnist_export --training_iteration=2000 --export_version=2 /tmp/mnist_model
+$>bazel-bin/tensorflow_serving/example/mnist_export --training_iteration=2000 --export_version=2 /tmp/mnist_model
 ~~~
 
 As you can see in `mnist_export.py`, the training and exporting is done the
diff --git a/tensorflow_serving/g3doc/serving_basic.md b/tensorflow_serving/g3doc/serving_basic.md
index 27c30bfeaef..37c2d1d3619 100644
--- a/tensorflow_serving/g3doc/serving_basic.md
+++ b/tensorflow_serving/g3doc/serving_basic.md
@@ -1,7 +1,3 @@
----
----
-
-
 # Serving a TensorFlow Model
 
 This tutorial shows you how to use TensorFlow Serving components to export a
@@ -12,7 +8,7 @@ requests), and it calculates an aggregate inference error rate.
 If you're already familiar with TensorFlow Serving, and you want to create a
 more complex server that handles batched inference requests, and discovers and
 serves new versions of a TensorFlow model that is being dynamically updated, see the
-[TensorFlow Serving advanced tutorial](serving_advanced).
+[TensorFlow Serving advanced tutorial](serving_advanced.md).
 
 This tutorial uses the simple Softmax Regression model introduced in the
 TensorFlow tutorial for handwritten image (MNIST data) classification. If you
@@ -26,7 +22,7 @@ trains and exports the model, and a C++ file
 ([mnist_inference.cc](https://github.com/tensorflow/serving/tree/master/tensorflow_serving/example/mnist_inference.cc))
 that loads the exported model and runs a [gRPC](http://www.grpc.io) service to
 serve it.
 
-Before getting started, please complete the [prerequisites](setup#prerequisites).
+Before getting started, please complete the [prerequisites](setup.md#prerequisites).
 
 ## Train And Export TensorFlow Model
diff --git a/tensorflow_serving/g3doc/setup.md b/tensorflow_serving/g3doc/setup.md
index 6ac4876f7b6..0f3039e294b 100644
--- a/tensorflow_serving/g3doc/setup.md
+++ b/tensorflow_serving/g3doc/setup.md
@@ -1,7 +1,3 @@
----
----
-
-
 # Installation
 
 ## Prerequisites
@@ -36,7 +32,7 @@ following steps:
 
 Our tutorials use [gRPC](http://www.grpc.io) (0.13 or higher) as our RPC
 framework. You can find the installation instructions
-[here](https://github.com/grpc/grpc/issues/5111).
+[here](https://github.com/grpc/grpc/tree/master/src/python/grpcio).
 
 ### Packages
@@ -114,5 +110,5 @@ To test your installation, execute:
 bazel test tensorflow_serving/...
 ~~~
 
-See the [basic tutorial](serving_basic) and [advanced tutorial](serving_advanced)
+See the [basic tutorial](serving_basic.md) and [advanced tutorial](serving_advanced.md)
 for more in-depth examples of running TensorFlow Serving.