From 5037e396eb7d41f508191215163030bec6e226e8 Mon Sep 17 00:00:00 2001
From: prathameshk54
Date: Sun, 6 Mar 2022 18:44:03 +0530
Subject: [PATCH 1/2] Updated README.md

---
 README.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 3dbbd33..f3cb77e 100644
--- a/README.md
+++ b/README.md
@@ -1 +1,4 @@
-"# Tensorflow-Bootcamp"
+"# Tensorflow-Bootcamp"
+Contains all the notebooks from the Udemy course by Jose Portilla:
+Complete Guide to TensorFlow for Deep Learning with Python
+

From 05b584ff7b5fe94085a24957fb294222e96948f9 Mon Sep 17 00:00:00 2001
From: prathameshk54
Date: Sat, 12 Mar 2022 17:15:52 +0530
Subject: [PATCH 2/2] added my notes

---
 tf_learn | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)
 create mode 100644 tf_learn

diff --git a/tf_learn b/tf_learn
new file mode 100644
index 0000000..e397bda
--- /dev/null
+++ b/tf_learn
@@ -0,0 +1,16 @@
+* Operator overloading is implemented for tensor ops (nodes)
+* Keras is built into TF 2.0
+* Eager execution vs graph execution (eager execution was added after PyTorch showed the need for it)
+** Method to convert an eager function to a graph -> graph_func = tf.function(eager_func) (see sketch 1 below)
+* tf.compat.v1 to use functions / classes from TF 1.x in TF 2.0
+* tf.compat.v1.train.Optimizer.minimize(loss)
+** loss is the tensor that has connections to y and y_pred in the graph, and thus to all the weights in the model
+** from the graph, TF knows which weights need to be adjusted by gradient descent, or you can also give the list of parameters
+** first step is to calculate all the gradients (using tf.GradientTape); tape.gradient(y, x) gives dy/dx, so we use tape.gradient(loss, params)
+** second step is to apply these gradients (optimizer.apply_gradients(zip(grads, params))) to adjust these params / weights (see sketch 2 below)
+* tf.estimator.Estimator - high-level API, provides functions for train, evaluate, predict
+** you can also build a custom estimator from a Keras model using tf.keras.estimator.model_to_estimator (check once; see sketch 3 below)
+* tf.cast(np_array)
+* tf.nn.sigmoid
+* tf
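
Sketch 1 - the eager-to-graph conversion from the notes, as a minimal runnable example; eager_square is a made-up function for illustration:

import tensorflow as tf

# An ordinary Python function runs eagerly, op by op.
def eager_square(x):
    return x * x

# tf.function traces it into a graph for faster repeated execution.
graph_square = tf.function(eager_square)

print(eager_square(tf.constant(3.0)))  # tf.Tensor(9.0, shape=(), dtype=float32)
print(graph_square(tf.constant(3.0)))  # same value, executed as a graph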
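
Sketch 2 - the two-step gradient flow from the notes (compute gradients with tf.GradientTape, then apply them with optimizer.apply_gradients); the linear model and data are invented for illustration:

import tensorflow as tf

# Toy linear model y_pred = w * x + b; the data follows y = 2x + 1.
w = tf.Variable(0.0)
b = tf.Variable(0.0)
params = [w, b]

x = tf.constant([1.0, 2.0, 3.0, 4.0])
y = tf.constant([3.0, 5.0, 7.0, 9.0])

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

for step in range(200):
    # Step 1: record the forward pass on the tape and compute d(loss)/d(param).
    with tf.GradientTape() as tape:
        y_pred = w * x + b
        loss = tf.reduce_mean(tf.square(y - y_pred))
    grads = tape.gradient(loss, params)
    # Step 2: apply the gradients to adjust these params / weights.
    optimizer.apply_gradients(zip(grads, params))

print(w.numpy(), b.numpy())  # should approach 2.0 and 1.0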
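
Sketch 3 - converting a Keras model to an Estimator with tf.keras.estimator.model_to_estimator (available in TF 2.x, later deprecated); the model architecture, the tf.cast input, and the dtype are arbitrary choices for illustration, and tf.nn.sigmoid from the last bullets appears as the activation:

import numpy as np
import tensorflow as tf

# tf.cast converts the NumPy array to a tensor with the requested dtype.
np_array = np.array([1, 2, 3])
t = tf.cast(np_array, tf.float32)

# Small Keras model using tf.nn.sigmoid as an activation.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation=tf.nn.sigmoid, input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')

# Wraps the compiled Keras model in an Estimator exposing train / evaluate / predict.
estimator = tf.keras.estimator.model_to_estimator(keras_model=model)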