Commit 8958b56

Authored by Eyal-Danieli, guy1992l, and yonishelach
[onnx_utils] Upgrade versions and fixed tests (#853) (#857)
* [Build] Fix html links, add <function>.html as source in documentation
* Update CI temporarily and update index
* [XGB-Custom] Fix test artifact key name
* [XGB-Serving][XGB-Test][XGB-Trainer] Fix tests (artifact key)
* [Build] Install python 3.9 when testing (#618)
* [Build] Update python version in CI (#620)
* Revert "[Build] Update python version in CI (#620)" (#621); reverts commit 0cd1f15
* Revert "[Build] Install python 3.9 when testing (#618)" (#619); reverts commit 3301415
* [Build] Build with python 3.9 (#622)
* Upgraded versions and corrected test to work locally
* Fixed notebook
* Increased mlrun version
* Updated item

Co-authored-by: guy1992l <83535508+guy1992l@users.noreply.github.com>
Co-authored-by: yonishelach <yonatanshelach@gmail.com>
Co-authored-by: Yoni Shelach <92271540+yonishelach@users.noreply.github.com>
1 parent: d1098de. Commit: 8958b56.

File tree

6 files changed: +172 −174 lines


onnx_utils/function.yaml

Lines changed: 53 additions & 59 deletions
Large diffs are not rendered by default.

onnx_utils/item.yaml

Lines changed: 13 additions & 7 deletions
@@ -12,7 +12,7 @@ labels:
   author: guyl
 maintainers: []
 marketplaceType: ''
-mlrunVersion: 1.1.0
+mlrunVersion: 1.7.2
 name: onnx_utils
 platformVersion: 3.5.0
 spec:
@@ -26,10 +26,16 @@ spec:
   image: mlrun/mlrun
   kind: job
   requirements:
-  - onnx~=1.13.0
-  - onnxruntime~=1.14.0
-  - onnxoptimizer~=0.3.0
-  - onnxmltools~=1.11.0
-  - tf2onnx~=1.13.0
+  - tqdm~=4.67.1
+  - tensorflow~=2.19.0
+  - tf_keras~=2.19.0
+  - torch~=2.6.0
+  - torchvision~=0.21.0
+  - onnx~=1.17.0
+  - onnxruntime~=1.19.2
+  - onnxoptimizer~=0.3.13
+  - onnxmltools~=1.13.0
+  - tf2onnx~=1.16.1
+  - plotly~=5.4.0
   url: ''
-version: 1.2.0
+version: 1.3.0
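This commit updates the same pin list in two places: `item.yaml` and `requirements.txt`. Keeping the two lists in sync by hand is easy to get wrong, so a small drift check can be useful. The sketch below is a hypothetical helper (not part of this repo): it uses a naive line scan of the `- pkg~=ver` layout shown in the diff rather than a YAML parser, so it only works for this exact shape.

```python
def parse_requirements_txt(text: str) -> set:
    """Collect non-empty, non-comment requirement lines from requirements.txt."""
    return {
        line.split("#")[0].strip()
        for line in text.splitlines()
        if line.split("#")[0].strip()
    }


def parse_item_yaml_requirements(text: str) -> set:
    """Naive scan for '- pkg~=ver' list entries; avoids a YAML dependency."""
    return {
        line.strip()[2:]
        for line in text.splitlines()
        if line.strip().startswith("- ") and "~=" in line
    }


item_yaml = """spec:
  requirements:
  - onnx~=1.17.0
  - tf2onnx~=1.16.1
"""
requirements_txt = "onnx~=1.17.0\ntf2onnx~=1.16.1\n"

# True when the two pin lists agree:
print(parse_item_yaml_requirements(item_yaml) == parse_requirements_txt(requirements_txt))
```

Run against the real files in CI, a check like this would have flagged any pin that was bumped in one file but not the other.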

onnx_utils/onnx_utils.ipynb

Lines changed: 34 additions & 33 deletions
@@ -84,15 +84,14 @@
 },
 {
 "cell_type": "code",
-"execution_count": null,
 "metadata": {
 "pycharm": {
 "name": "#%%\n"
 }
 },
-"outputs": [],
 "source": [
 "import os\n",
+"os.environ[\"TF_USE_LEGACY_KERAS\"] = \"true\"\n",
 "from tempfile import TemporaryDirectory\n",
 "\n",
 "# Create a temporary directory for the model artifact:\n",
@@ -107,7 +106,9 @@
 "\n",
 "# Choose our optimized ONNX version model's name:\n",
 "OPTIMIZED_ONNX_MODEL_NAME = \"optimized_onnx_mobilenetv2\""
-]
+],
+"outputs": [],
+"execution_count": null
 },
 {
 "cell_type": "markdown",
@@ -122,22 +123,20 @@
 },
 {
 "cell_type": "code",
-"execution_count": null,
 "metadata": {
 "pycharm": {
 "name": "#%%\n"
 }
 },
-"outputs": [],
 "source": [
 "# mlrun: start-code"
-]
+],
+"outputs": [],
+"execution_count": null
 },
 {
 "cell_type": "code",
-"execution_count": null,
 "metadata": {},
-"outputs": [],
 "source": [
 "from tensorflow import keras\n",
 "\n",
@@ -158,20 +157,22 @@
 "\n",
 " # Log the model:\n",
 " model_handler.log()"
-]
+],
+"outputs": [],
+"execution_count": null
 },
 {
 "cell_type": "code",
-"execution_count": null,
 "metadata": {
 "pycharm": {
 "name": "#%%\n"
 }
 },
-"outputs": [],
 "source": [
 "# mlrun: end-code"
-]
+],
+"outputs": [],
+"execution_count": null
 },
 {
 "cell_type": "markdown",
@@ -186,13 +187,11 @@
 },
 {
 "cell_type": "code",
-"execution_count": null,
 "metadata": {
 "pycharm": {
 "name": "#%%\n"
 }
 },
-"outputs": [],
 "source": [
 "import mlrun\n",
 "\n",
@@ -213,7 +212,9 @@
 " },\n",
 " local=True\n",
 ")"
-]
+],
+"outputs": [],
+"execution_count": null
 },
 {
 "cell_type": "markdown",
@@ -224,13 +225,11 @@
 },
 {
 "cell_type": "code",
-"execution_count": null,
 "metadata": {
 "pycharm": {
 "name": "#%%\n"
 }
 },
-"outputs": [],
 "source": [
 "# Import the ONNX function from the marketplace:\n",
 "onnx_utils_function = mlrun.import_function(\"hub://onnx_utils\")\n",
@@ -247,7 +246,9 @@
 " },\n",
 " local=True\n",
 ")"
-]
+],
+"outputs": [],
+"execution_count": null
 },
 {
 "cell_type": "markdown",
@@ -258,19 +259,19 @@
 },
 {
 "cell_type": "code",
-"execution_count": null,
 "metadata": {
 "pycharm": {
 "name": "#%%\n"
 }
 },
-"outputs": [],
 "source": [
 "import os\n",
 "\n",
 "\n",
 "print(os.listdir(ARTIFACT_PATH))"
-]
+],
+"outputs": [],
+"execution_count": null
 },
 {
 "cell_type": "markdown",
@@ -304,13 +305,11 @@
 },
 {
 "cell_type": "code",
-"execution_count": null,
 "metadata": {
 "pycharm": {
 "name": "#%%\n"
 }
 },
-"outputs": [],
 "source": [
 "onnx_utils_function.run(\n",
 " handler=\"optimize\",\n",
@@ -322,7 +321,9 @@
 " },\n",
 " local=True\n",
 ")"
-]
+],
+"outputs": [],
+"execution_count": null
 },
 {
 "cell_type": "markdown",
@@ -333,16 +334,16 @@
 },
 {
 "cell_type": "code",
-"execution_count": null,
 "metadata": {
 "pycharm": {
 "name": "#%%\n"
 }
 },
-"outputs": [],
 "source": [
 "print(os.listdir(ARTIFACT_PATH))"
-]
+],
+"outputs": [],
+"execution_count": null
 },
 {
 "cell_type": "markdown",
@@ -357,24 +358,24 @@
 },
 {
 "cell_type": "code",
-"execution_count": null,
 "metadata": {
 "pycharm": {
 "name": "#%%\n"
 }
 },
-"outputs": [],
 "source": [
 "import shutil\n",
 "\n",
 "\n",
 "shutil.rmtree(ARTIFACT_PATH)"
-]
+],
+"outputs": [],
+"execution_count": null
 }
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Python 3",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
 "name": "python3"
 },
@@ -388,9 +389,9 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.7.10"
+"version": "3.9.22"
 }
 },
 "nbformat": 4,
 "nbformat_minor": 4
-}
+}
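Most of the notebook churn above is not content changes at all: in every code cell, the `"outputs"` and `"execution_count"` keys move from before `"source"` to after it, which is the key order a newer Jupyter writer emits when the notebook is re-saved. A minimal sketch of normalizing cell key order to avoid this kind of diff noise (the preferred order here is an assumption generalized from this diff, not a documented guarantee):

```python
# Key order that newer Jupyter/nbformat writers appear to emit (assumption):
KEY_ORDER = ["cell_type", "id", "metadata", "source", "outputs", "execution_count"]


def normalize_cell(cell: dict) -> dict:
    """Return a copy of a notebook cell dict with known keys in KEY_ORDER.

    Unknown keys are appended after the known ones, preserving their order.
    """
    ordered = {k: cell[k] for k in KEY_ORDER if k in cell}
    ordered.update({k: v for k, v in cell.items() if k not in ordered})
    return ordered


cell = {
    "cell_type": "code",
    "execution_count": None,
    "metadata": {},
    "outputs": [],
    "source": ["print('hello')\n"],
}
print(list(normalize_cell(cell)))
# → ['cell_type', 'metadata', 'source', 'outputs', 'execution_count']
```

Running a normalizer like this over all cells before committing (or re-saving every notebook with one Jupyter version) keeps diffs limited to real edits, such as the added `TF_USE_LEGACY_KERAS` line.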

onnx_utils/onnx_utils.py

Lines changed: 20 additions & 12 deletions
@@ -146,7 +146,7 @@ def pytorch_to_onnx(
         input_layers_names=input_layers_names,
         output_layers_names=output_layers_names,
         dynamic_axes=dynamic_axes,
-        is_batched=is_batched
+        is_batched=is_batched,
     )


@@ -160,26 +160,31 @@ def pytorch_to_onnx(
 def to_onnx(
     context: mlrun.MLClientCtx,
     model_path: str,
+    load_model_kwargs: dict = None,
     onnx_model_name: str = None,
     optimize_model: bool = True,
     framework_kwargs: Dict[str, Any] = None,
 ):
     """
     Convert the given model to an ONNX model.

-    :param context:          The MLRun function execution context
-    :param model_path:       The model path store object.
-    :param onnx_model_name:  The name to use to log the converted ONNX model. If not given, the given `model_name` will
-                             be used with an additional suffix `_onnx`. Defaulted to None.
-    :param optimize_model:   Whether to optimize the ONNX model using 'onnxoptimizer' before saving the model. Defaulted
-                             to True.
-    :param framework_kwargs: Additional arguments each framework may require in order to convert to ONNX. To get the doc
-                             string of the desired framework onnx conversion function, pass "help".
+    :param context:           The MLRun function execution context
+    :param model_path:        The model path store object.
+    :param load_model_kwargs: Keyword arguments to pass to the `AutoMLRun.load_model` method.
+    :param onnx_model_name:   The name to use to log the converted ONNX model. If not given, the given `model_name` will
+                              be used with an additional suffix `_onnx`. Defaulted to None.
+    :param optimize_model:    Whether to optimize the ONNX model using 'onnxoptimizer' before saving the model.
+                              Defaulted to True.
+    :param framework_kwargs:  Additional arguments each framework may require to convert to ONNX. To get the doc string
+                              of the desired framework onnx conversion function, pass "help".
     """
     from mlrun.frameworks.auto_mlrun.auto_mlrun import AutoMLRun

     # Get a model handler of the required framework:
-    model_handler = AutoMLRun.load_model(model_path=model_path, context=context)
+    load_model_kwargs = load_model_kwargs or {}
+    model_handler = AutoMLRun.load_model(
+        model_path=model_path, context=context, **load_model_kwargs
+    )

     # Get the model's framework:
     framework = model_handler.FRAMEWORK_NAME
@@ -219,6 +224,7 @@ def to_onnx(
 def optimize(
     context: mlrun.MLClientCtx,
     model_path: str,
+    handler_init_kwargs: dict = None,
     optimizations: List[str] = None,
     fixed_point: bool = False,
     optimized_model_name: str = None,
@@ -228,8 +234,9 @@ def optimize(

     :param context: The MLRun function execution context.
     :param model_path: Path to the ONNX model object.
+    :param handler_init_kwargs: Keyword arguments to pass to the `ONNXModelHandler` init method preloading.
     :param optimizations: List of possible optimizations. To see what optimizations are available, pass "help".
-                          If None, all of the optimizations will be used. Defaulted to None.
+                          If None, all the optimizations will be used. Defaulted to None.
     :param fixed_point: Optimize the weights using fixed point. Defaulted to False.
     :param optimized_model_name: The name of the optimized model. If None, the original model will be overridden.
                                  Defaulted to None.
@@ -245,8 +252,9 @@ def optimize(
         return

     # Create the model handler:
+    handler_init_kwargs = handler_init_kwargs or {}
     model_handler = ONNXModelHandler(
-        model_path=model_path, context=context
+        model_path=model_path, context=context, **handler_init_kwargs
     )

     # Load the ONNX model:
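Both new parameters (`load_model_kwargs` and `handler_init_kwargs`) use the same idiom: default to `None`, then normalize with `or {}` right before spreading. A minimal self-contained sketch of the pattern (the `load_model` helper below is a hypothetical stand-in for `AutoMLRun.load_model`, not the real API):

```python
def load_model(model_path: str, context=None, **kwargs) -> dict:
    # Hypothetical stand-in for AutoMLRun.load_model, for illustration only.
    return {"model_path": model_path, "context": context, **kwargs}


def to_onnx(model_path: str, load_model_kwargs: dict = None) -> dict:
    # Defaulting to None rather than {} matters: a mutable default dict would
    # be created once and shared across every call to the function.
    # `or {}` normalizes it just before the keyword spread.
    load_model_kwargs = load_model_kwargs or {}
    return load_model(model_path=model_path, **load_model_kwargs)


print(to_onnx("store://models/my-model"))
print(to_onnx("store://models/my-model", {"model_format": "h5"}))
```

The pass-through dict lets callers forward framework-specific options to the underlying loader without the wrapper having to enumerate them in its own signature.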

onnx_utils/requirements.txt

Lines changed: 10 additions & 10 deletions
@@ -1,11 +1,11 @@
-tqdm~=4.62.3
-tensorflow~=2.7.0
-torch~=1.10.0
-torchvision~=0.11.1
-onnx~=1.10.1
-onnxruntime~=1.8.1
-onnxoptimizer~=0.2.0
-onnxmltools~=1.9.0
-tf2onnx~=1.9.0
+tqdm~=4.67.1
+tensorflow~=2.19.0
+tf_keras~=2.19.0
+torch~=2.6.0
+torchvision~=0.21.0
+onnx~=1.17.0
+onnxruntime~=1.19.2
+onnxoptimizer~=0.3.13
+onnxmltools~=1.13.0
+tf2onnx~=1.16.1
 plotly~=5.4.0
-wrapt<1.15.0 # wrapt==1.15.0 fails tensorflow 2.7 Todo: please remove when updating tensorflow
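Every pin here uses the compatible-release operator `~=`, which allows patch-level drift while blocking the next minor release: `onnxruntime~=1.19.2` accepts 1.19.2 up to (but not including) 1.20.0. A toy illustration of that rule for plain `X.Y.Z` versions (real resolution should use `packaging.specifiers.SpecifierSet`; this sketch ignores pre-releases, epochs, and other PEP 440 details):

```python
def compatible_release(version: str, spec: str) -> bool:
    """Toy check for pip's '~=' operator on plain X.Y.Z versions.

    '~=1.19.2' means: >= 1.19.2 and < 1.20.0 (only the last component floats).
    """
    v = [int(p) for p in version.split(".")]
    s = [int(p) for p in spec.split(".")]
    # Pad both to equal length for the lower-bound comparison:
    n = max(len(v), len(s))
    v_padded = v + [0] * (n - len(v))
    s_padded = s + [0] * (n - len(s))
    # Upper bound: drop the last component and bump the new last one,
    # e.g. 1.19.2 -> 1.20
    upper = s[:-1]
    upper[-1] += 1
    v_prefix = (v + [0] * len(upper))[: len(upper)]
    return v_padded >= s_padded and v_prefix < upper


print(compatible_release("1.19.5", "1.19.2"))  # inside the ~=1.19.2 window
print(compatible_release("1.20.0", "1.19.2"))  # outside it
```

This is why the bump from `onnxruntime~=1.8.1` to `~=1.19.2` had to be made explicitly in the file: `~=` would never have let pip cross the 1.8 → 1.9 boundary on its own.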
