This repository was archived by the owner on May 20, 2020. It is now read-only.

Conversation

@henrypinkard (Contributor) commented Dec 2, 2019

I think the commits that aren't under my username are there because I forgot to change the default git email on the VM template we used.

@henrypinkard requested a review from @kinshuk on December 2, 2019 at 20:50
downloadDir = 'XXX/orig'
archive_storage_bucket = "fuego-firecam-a"

server_ip_and_port = 'localhost:8500' #depends on the specific inference server running on GCP
@henrypinkard (Contributor, Author) commented:

"localhost" means do inference locally on same machine. Can also be an IP address to an inference server

  dataset_dir, split_name, shard_id, numShards)

- with tf.python_io.TFRecordWriter(output_filename) as tfrecord_writer:
+ with tf.io.TFRecordWriter(output_filename) as tfrecord_writer:
@henrypinkard (Contributor, Author) commented:

Shouldn't change functionality; this just switches away from the deprecated interface.
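
For reference, a minimal standalone sketch of the replacement writer in use (the file name and feature below are made up for illustration):

import tensorflow as tf

# tf.io.TFRecordWriter is the non-deprecated equivalent of
# tf.python_io.TFRecordWriter; the context-manager usage is identical.
example = tf.train.Example(features=tf.train.Features(feature={
    'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[1])),
}))

with tf.io.TFRecordWriter('example.tfrecord') as tfrecord_writer:
    tfrecord_writer.write(example.SerializeToString())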

@kinshuk (Member) left a comment:

I'm curious to know how the model produced by this compares with our current Inception model.

import datetime
import math

import docker
@kinshuk (Member) commented:

Please add the standard license and copyright header at the top of the file. Also, please add a short comment (one or a few lines) describing the main purpose of this file.

Comment on lines +36 to +37
# tf.app.flags.DEFINE_string('server', server_ip_and_port, 'PredictionService host:port')
# channel = grpc.insecure_channel(tf.app.flags.FLAGS.server)
@kinshuk (Member) commented:

Should these two lines be deleted? If not, please describe when/how they are needed/useful.

# tf.app.flags.DEFINE_string('server', server_ip_and_port, 'PredictionService host:port')
# channel = grpc.insecure_channel(tf.app.flags.FLAGS.server)
channel = grpc.insecure_channel(server_ip_and_port)
# grpc.secure_channel()
@kinshuk (Member) commented:

secure_channel definitely sounds better than insecure_channel. Please document what's preventing use of the secure version.
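
For reference, a hedged sketch of what the secure variant could look like if the serving endpoint terminated TLS; the CA certificate path is a placeholder, and whether TLS is set up on this deployment is exactly the open question above:

import grpc

# Placeholder path; this PR does not ship a CA bundle or a TLS-enabled endpoint.
with open('/path/to/ca.pem', 'rb') as f:
    credentials = grpc.ssl_channel_credentials(root_certificates=f.read())

channel = grpc.secure_channel(server_ip_and_port, credentials)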

Comment on lines +8 to +15
sudo docker pull tensorflow/serving:latest-gpu
sudo docker run -d --name serving_base tensorflow/serving:latest-gpu
# create intermediate dir
sudo docker exec serving_base mkdir -p /models/$NAME
sudo docker cp $MODEL_PATH serving_base:/models/$NAME/1
sudo docker commit --change "ENV MODEL_NAME $NAME" serving_base $NAME"_serving"
sudo docker kill serving_base
sudo docker rm serving_base
@kinshuk (Member) commented:

Is this possible from a Dockerfile? If so, what's the tradeoff of creating the image with this script vs. a Dockerfile?
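
For comparison, a rough, untested sketch of the Dockerfile route being asked about; the model name and path are placeholders, and the model directory would need to be in the build context:

# Dockerfile equivalent of the docker-commit steps in the script above.
FROM tensorflow/serving:latest-gpu

# The model directory must be available in the build context.
COPY ./my_model /models/my_model/1
ENV MODEL_NAME=my_model

It would be built with something like docker build -t my_model_serving . ; the script's apparent advantage is that $NAME and $MODEL_PATH stay runtime parameters rather than being baked in at build time.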

@@ -0,0 +1,77 @@
import glob
@kinshuk (Member) commented:

Please add the standard license and copyright header at the top of the file. Also, please add a short comment (one or a few lines) describing the main purpose of this file.

Comment on lines +41 to +43
optArgs = [
["t", "trainPercentage", "percentage of data to use for training vs. validation (default 90)"]
]
@kinshuk (Member) commented:

The code doesn't use this argument, so you can just remove this and use an empty list.

Comment on lines +51 to +53
val_steps = 100 #only needed for now because of a bug in tf2.0, which should be fixed in next version
#TODO: either set this to # of validation examples /batch size (i.e. figure out num validation examples)
#or upgrade to TF2.1 when its ready and automatically go thorugh the whole set
@kinshuk (Member) commented:

Please reference the bug number/URL in the comment.
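
For reference, a minimal sketch of the first option in that TODO; the numbers are hypothetical and would have to come from the validation split and training config rather than being hard-coded:

import math

# Hypothetical values; in the PR these would come from the dataset split
# and the training configuration.
num_val_examples = 900
batch_size = 32

# Cover the whole validation set instead of a hard-coded 100 steps.
val_steps = math.ceil(num_val_examples / batch_size)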
