The goal of this project is to investigate, compare, and document the practical differences between TensorFlow’s eager execution and graph execution (`@tf.function`), focusing on:
- Numerical correctness between modes.
- Execution time trade-offs for small and repeated operations.
- Behavior of Python control flow inside graphs.
- Issues with Python side-effects (e.g., list mutation) and safe graph-friendly alternatives.
- Variable mutation and in-place updates in `@tf.function`.
The notebook takes a hands-on experimental approach, using small, controlled code snippets to explore specific behaviors:
- Basic numerical test – Implement a simple function (matrix multiplication + bias addition) and run it in both eager and graph modes.
- Control flow with AutoGraph – Demonstrate how a Python `if/else` is converted into graph operations and inspect the generated graph definition.
- Micro-benchmarking – Use `timeit` to measure performance differences for single vs. repeated calls.
- Model tracing – Build a minimal Keras model and examine graph tracing (see the sketch after this list).
- Side effects – Show that Python list appends behave unexpectedly inside graphs, then introduce the safe pattern using `tf.TensorArray` and `tf.range`.
- Variable mutation – Demonstrate correct in-graph state updates using `.assign` and `tf.tensor_scatter_nd_update`.
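As a taste of the model-tracing experiment, here is a minimal sketch (layer sizes and names are illustrative, not the notebook's exact code) of wrapping a small Keras model in `tf.function` and inspecting the traced graph:

```python
import tensorflow as tf
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Conv2D, Dense, Flatten

# Build a tiny functional model (shapes chosen for illustration only).
inputs = Input(shape=(8, 8, 1))
x = Conv2D(4, 3, activation="relu")(inputs)
x = Flatten()(x)
outputs = Dense(2)(x)
model = Model(inputs, outputs)

# Wrapping the call in tf.function makes TensorFlow trace it into a graph
# the first time it sees a new input signature.
@tf.function
def traced_call(batch):
    return model(batch)

batch = tf.random.normal((1, 8, 8, 1))
concrete = traced_call.get_concrete_function(batch)
print(concrete.graph)  # the FuncGraph produced by tracing
```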
From imports and usage:
- TensorFlow (`tensorflow`) – eager execution, `tf.function`, `TensorArray`, variables, and tensor ops.
- Keras (via TensorFlow) – `Input`, `Model`, `Conv2D`, `Dense`, `Flatten`.
- NumPy – tensor creation and manipulation.
- `timeit` – performance measurement.
- (Optional) `ipdb` – debugging (not required to run the notebook).
Dataset: not provided – all examples use synthetic, randomly generated tensors; no external dataset is loaded.
Requirements:

```bash
pip install tensorflow numpy
```

(Optional)

```bash
pip install ipdb
```

Run the notebook:

```bash
jupyter notebook eager_vs_graph_me.ipynb
```

or in JupyterLab:

```bash
jupyter lab eager_vs_graph_me.ipynb
```
Execute cells sequentially to reproduce experiments and outputs.
Both modes produce identical results for the test function:
```
Eager function value: [[22.7]]
Graph mode function value: [[22.7]]
```
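A sketch of the pattern behind this check (the constants here are chosen to reproduce the recorded value; the notebook's exact inputs may differ):

```python
import tensorflow as tf

def dense_step(x, w, b):
    # Matrix multiplication followed by bias addition.
    return tf.matmul(x, w) + b

# The same Python function, compiled into a graph.
graph_dense_step = tf.function(dense_step)

x = tf.constant([[2.0, 3.0]])
w = tf.constant([[5.0], [4.0]])
b = tf.constant([[0.7]])

print("Eager function value:", dense_step(x, w, b).numpy())            # [[22.7]]
print("Graph mode function value:", graph_dense_step(x, w, b).numpy()) # [[22.7]]
```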
Branching with a Python `if/else` is correctly lowered into graph ops:

```
first_branch:1
sec_branch:0
```
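A minimal sketch consistent with this output (the notebook's exact code may differ) — AutoGraph converts the Python branch into a `tf.cond` inside the traced graph:

```python
import tensorflow as tf

def simple_relu(x):
    # Plain Python control flow; AutoGraph lowers it to graph ops.
    if tf.greater(x, 0):
        return x
    else:
        return 0

tf_simple_relu = tf.function(simple_relu)

print("first_branch:", tf_simple_relu(tf.constant(1)).numpy())   # 1
print("sec_branch:", tf_simple_relu(tf.constant(-1)).numpy())    # 0
```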
The generated graph can be printed via:

```python
tf_simple_relu.get_concrete_function(tf.constant(1)).graph.as_graph_def()
```
Single call:

```
egar_function time 0.0099053
graph_finction time 0.0362043
```

10,000 calls:

```
egar_function time 0.2091298
graph_finction time 1.2625766
```

(Values are from the recorded run; actual times vary by hardware and environment.)
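A sketch of how such a micro-benchmark can be set up with `timeit` (function and variable names here are illustrative, not the notebook's exact code):

```python
import timeit
import tensorflow as tf

def dense_step(x, w, b):
    return tf.matmul(x, w) + b

graph_dense_step = tf.function(dense_step)

x = tf.random.normal((1, 2))
w = tf.random.normal((2, 1))
b = tf.random.normal((1, 1))

# Trace once up front so the timed graph calls are not paying tracing cost.
graph_dense_step(x, w, b)

print("eager, 10,000 calls:",
      timeit.timeit(lambda: dense_step(x, w, b), number=10_000))
print("graph, 10,000 calls:",
      timeit.timeit(lambda: graph_dense_step(x, w, b), number=10_000))
```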
Appending to a Python list inside `@tf.function` captures symbolic `tf.Tensor` objects, not eager values.
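A minimal sketch of the pitfall (names are illustrative): the append runs only while the function is being traced, so the list ends up holding graph placeholders.

```python
import tensorflow as tf

captured = []

@tf.function
def traced_append(x):
    # Executes during tracing: the list receives a symbolic tf.Tensor,
    # not the eager value you might expect.
    captured.append(x + 1)
    return x + 1

traced_append(tf.constant(1))
traced_append(tf.constant(2))  # same signature, no retrace: list unchanged
print(captured)  # one symbolic Tensor, not [2, 3]
```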
Safe pattern with `tf.TensorArray`:

```
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([2, 4, 6])>
```
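A sketch of the graph-safe accumulation pattern consistent with this output (the notebook's exact code may differ):

```python
import tensorflow as tf

@tf.function
def double_range(n):
    # TensorArray is the graph-safe accumulator; it works inside
    # AutoGraph-converted loops where Python lists do not.
    ta = tf.TensorArray(dtype=tf.int32, size=n)
    for i in tf.range(n):
        ta = ta.write(i, 2 * (i + 1))
    return ta.stack()

print(double_range(tf.constant(3)))  # tf.Tensor([2 4 6], shape=(3,), dtype=int32)
```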
Demonstrates mutation with `.assign` and `tf.tensor_scatter_nd_update`:

```
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([1, 5, 5])>
tf.Tensor([50 4 6], shape=(3,), dtype=int32)
```
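A sketch consistent with these outputs (the notebook's exact code may differ):

```python
import tensorflow as tf

v = tf.Variable([1, 2, 3])

@tf.function
def update_state():
    # In-graph mutation of a tf.Variable must go through .assign* ops;
    # Python item assignment (v[1] = 5) is not valid on variables.
    v[1:].assign([5, 5])
    return v.read_value()

print(update_state())  # [1 5 5]

# Regular tensors are immutable; scatter updates return a new tensor instead.
t = tf.constant([2, 4, 6])
t2 = tf.tensor_scatter_nd_update(t, indices=[[0]], updates=[50])
print(t2)  # tf.Tensor([50 4 6], shape=(3,), dtype=int32)
```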
Key results (extracted directly from the notebook outputs):
Eager vs Graph Equality:

```
Eager function value: [[22.7]]
Graph mode function value: [[22.7]]
```

Performance – 10k calls:

```
egar_function time 0.2091298
graph_finction time 1.2625766
```

TensorArray Output:

```
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([2, 4, 6])>
```
- Eager mode is often faster for small, repeated operations in interactive contexts, since graph mode pays tracing and dispatch overhead that tiny computations cannot amortize.
- Graph mode is valuable for complex computations, deployment, or hardware optimization, but may introduce pitfalls with Python side-effects.
- AutoGraph enables Pythonic control flow in graphs but requires care with data types and shapes.
- State changes in graph mode must be expressed using TensorFlow ops (`assign`, `TensorArray`) rather than Python constructs.
💡 Some interactive outputs (e.g., plots, widgets) may not display correctly on GitHub. If so, please view this notebook via nbviewer.org for full rendering.
Mehran Asgari
Email: imehranasgari@gmail.com
GitHub: https://github.com/imehranasgari
This project is licensed under the Apache 2.0 License – see the `LICENSE` file for details.