Merged
1 change: 1 addition & 0 deletions environment.yml
Original file line number Diff line number Diff line change
Expand Up @@ -3,6 +3,7 @@ channels:
- conda-forge
dependencies:
- python=3.10.16
- notebook==6.4.12
- jupyter_contrib_nbextensions
- jupyterhub
- jupyter-book
Expand Down
2 changes: 1 addition & 1 deletion part1_getting_started.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -399,7 +399,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.16"
"version": "3.10.14"
}
},
"nbformat": 4,
Expand Down
7 changes: 5 additions & 2 deletions part2_advanced_config.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -148,7 +148,9 @@
"metadata": {},
"source": [
"## Customize\n",
"Let's just try setting the precision of the first layer weights to something more narrow than 16 bits. Using fewer bits can save resources in the FPGA. After inspecting the profiling plot above, let's try 8 bits with 1 integer bit.\n",
"Let's just try setting the precision of the first layer weights to something more narrow than 16 bits. Using fewer bits can save resources in the FPGA. After inspecting the profiling plot above, let's try 8 bits with 2 integer bits.\n",
"\n",
"**NOTE** Using `auto` precision can lead to undesired side effects. In the case of this model, the bit width used for the output of the last fully connected layer is larger than can be reasonably represented with the look-up table in the softmax implementation. We therefore need to restrict it by hand to achieve proper results.\n",
"\n",
"Then create a new `HLSModel`, and display the profiling with the new config. This time, just display the weight profile by not providing any data '`X`'. Then create the `HLSModel` and display the architecture. Notice the box around the weights of the first layer reflects the different precision."
]
Expand All @@ -160,6 +162,7 @@
"outputs": [],
"source": [
"config['LayerName']['fc1']['Precision']['weight'] = 'ap_fixed<8,2>'\n",
"config['LayerName']['output']['Precision']['result'] = 'fixed<16,6,RND,SAT>'\n",
"hls_model = hls4ml.converters.convert_from_keras_model(\n",
" model, hls_config=config, output_dir='model_1/hls4ml_prj_2', part='xcu250-figd2104-2L-e'\n",
")\n",
Expand Down Expand Up @@ -395,7 +398,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.16"
"version": "3.10.14"
}
},
"nbformat": 4,
Expand Down
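The `ap_fixed<8,2>` weight precision chosen in the part2 change allocates 2 integer bits (including the sign) and 6 fractional bits. A minimal, illustrative Python sketch of the resulting range and resolution (this mimics, but is not, the HLS `ap_fixed` implementation):

```python
def quantize_fixed(x, total_bits=8, int_bits=2):
    """Round-to-nearest fixed-point quantization with saturation,
    roughly mimicking ap_fixed<total_bits, int_bits> (int_bits includes sign)."""
    frac_bits = total_bits - int_bits
    scale = 1 << frac_bits                       # 2**6 = 64 steps per unit
    lo = -(1 << (total_bits - 1)) / scale        # -2.0 for <8,2>
    hi = ((1 << (total_bits - 1)) - 1) / scale   # +1.984375 for <8,2>
    q = round(x * scale) / scale                 # round to nearest step
    return min(max(q, lo), hi)                   # saturate on overflow

print(quantize_fixed(0.1))   # 0.09375 (nearest multiple of 1/64)
print(quantize_fixed(3.7))   # saturates to 1.984375
```

So values outside roughly [-2, 2) saturate, which is why the profiling plot should be inspected before narrowing the precision.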
2 changes: 1 addition & 1 deletion part3_compression.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -312,7 +312,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.16"
"version": "3.10.14"
}
},
"nbformat": 4,
Expand Down
2 changes: 1 addition & 1 deletion part4.1_HG_quantization.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -474,7 +474,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.16"
"version": "3.10.14"
}
},
"nbformat": 4,
Expand Down
2 changes: 1 addition & 1 deletion part4_quantization.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -397,7 +397,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.16"
"version": "3.10.14"
}
},
"nbformat": 4,
Expand Down
20 changes: 12 additions & 8 deletions part6_cnns.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -658,7 +658,9 @@
"\n",
"![alt text](images/conv2d_animation.gif \"The implementation of convolutional layers in hls4ml.\")\n",
"\n",
"Lastly, we will use ``['Strategy'] = 'Latency'`` for all the layers in the hls4ml configuration. If one layer would have >4096 elements, we sould set ``['Strategy'] = 'Resource'`` for that layer, or increase the reuse factor by hand. You can find examples of how to do this below."
"Lastly, we will use ``['Strategy'] = 'Latency'`` for all the layers in the hls4ml configuration. If a layer has >4096 elements, we should set ``['Strategy'] = 'Resource'`` for that layer, or increase the reuse factor by hand. You can find examples of how to do this below.\n",
"\n",
"**NOTE** Using `auto` precision can lead to undesired side effects. In the case of this model, the bit width used for the output of the last fully connected layer is larger than can be reasonably represented with the look-up table in the softmax implementation. We therefore need to restrict it by hand to achieve proper results.\n"
]
},
{
Expand All @@ -674,7 +676,7 @@
"hls_config = hls4ml.utils.config_from_keras_model(\n",
" model, granularity='name', backend='Vitis', default_precision='ap_fixed<16,6>'\n",
")\n",
"\n",
"hls_config['LayerName']['output_dense']['Precision']['result'] = 'fixed<16,6,RND,SAT>'\n",
"plotting.print_dict(hls_config)\n",
"\n",
"\n",
Expand Down Expand Up @@ -721,12 +723,13 @@
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false
},
"metadata": {},
"source": [
"The colored boxes are the distribution of the weights of the model, and the gray band illustrates the numerical range covered by the chosen fixed point precision. As we configured, this model uses a precision of ``ap_fixed<16,6>`` for all layers of the model. Let's now build our QKeras model"
"The colored boxes are the distribution of the weights of the model, and the gray band illustrates the numerical range covered by the chosen fixed point precision. As we configured, this model uses a precision of ``ap_fixed<16,6>`` for the weights and biases of all layers of the model. \n",
"\n",
"Let's now build our QKeras model. \n",
"\n",
"**NOTE** Using `auto` precision can lead to undesired side effects. In the case of this model, the bit width used for the output of the last fully connected layer is larger than can be reasonably represented with the look-up table in the softmax implementation. We therefore need to restrict it by hand to achieve proper results."
]
},
{
Expand All @@ -737,6 +740,7 @@
"source": [
"# Then the QKeras model\n",
"hls_config_q = hls4ml.utils.config_from_keras_model(qmodel, granularity='name', backend='Vitis')\n",
"hls_config_q['LayerName']['output_dense']['Precision']['result'] = 'fixed<16,6,RND,SAT>'\n",
"\n",
"plotting.print_dict(hls_config_q)\n",
"\n",
Expand Down Expand Up @@ -1315,7 +1319,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.14"
}
},
"nbformat": 4,
Expand Down
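The repeated NOTE above motivates the `fixed<16,6,RND,SAT>` override on the output layer: with the default wrap-on-overflow behavior, an out-of-range value fed to the softmax look-up table flips sign instead of clipping. A hedged sketch of the difference (plain Python, not the hls4ml/HLS implementation; bit widths match the `<16,6>` type used in the diff):

```python
def to_fixed(x, total_bits=16, int_bits=6, saturate=True):
    """Convert x to fixed<total_bits, int_bits>. With saturate=False the
    stored value wraps around two's-complement style on overflow."""
    frac = total_bits - int_bits
    n = round(x * (1 << frac))                            # integer representation
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    if saturate:
        n = min(max(n, lo), hi)                           # SAT: clip to range
    else:
        n = ((n - lo) % (1 << total_bits)) + lo           # wrap into [lo, hi]
    return n / (1 << frac)

print(to_fixed(40.0))                  # ~31.999, clipped near the +32 limit
print(to_fixed(40.0, saturate=False))  # -24.0, wrapped to a negative value
```

A large positive logit wrapping to a negative number is exactly the kind of "undesired side effect" the notebooks guard against by restricting the result precision by hand.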
2 changes: 1 addition & 1 deletion part7a_bitstream.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -282,7 +282,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.16"
"version": "3.10.14"
}
},
"nbformat": 4,
Expand Down
2 changes: 1 addition & 1 deletion part8_symbolic_regression.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -492,7 +492,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.16"
"version": "3.10.14"
}
},
"nbformat": 4,
Expand Down