
[tools/onnx-subgraph] add multi subgraphs inference code #14769

Merged 3 commits into Samsung:master on Mar 5, 2025

Conversation

chenyx113 (Contributor):

related issue: #14534
historical full changes PR: #14613

add code for multi subgraphs inference

ONE-DCO-1.0-Signed-off-by: Youxin Chen <yx113.chen@samsung.com>
@chenyx113 chenyx113 marked this pull request as ready for review March 4, 2025 13:08
@@ -116,7 +163,16 @@ def prepare_initial_input_data(onnx_model_path, default_input_data):
"x": np.random.rand(1, 3, 256, 256).astype(np.float32),
}
initial_input_data = prepare_initial_input_data(args.single, default_input_data)

Contributor:

Please remove this line; it doesn't seem to be related to the change.

model_input_data = {name: input_data[name] for name in input_names}
outputs = session.run(None, model_input_data)
current_model_outputs = dict(zip(output_names, outputs))
if output_names_to_collect is not None:
Contributor:

We can hoist this `if` line above the for loop; when `output_names_to_collect` is None, the return value is simply {}.
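The suggested refactor can be sketched like this (a minimal stand-in, not the PR's actual file; the function name and structure here are illustrative, following the snippet above):

```python
def collect_outputs(all_outputs, output_names_to_collect):
    """Collect selected outputs from a {name: value} dict.

    The parameter check is hoisted above the loop, as suggested in
    review: when output_names_to_collect is None, return {} immediately
    instead of re-testing the condition on every iteration.
    """
    if output_names_to_collect is None:
        return {}
    collected_outputs = {}
    for name in output_names_to_collect:
        if name in all_outputs:
            collected_outputs[name] = all_outputs[name]
    return collected_outputs
```

This keeps the loop body free of the degenerate-argument case, which is what the review comment asks for.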

Contributor:

The updated change doesn't reflect my comment correctly.
Please read it again.

# Perform inference using multiple split subgraph models
output_multiple = model_inference.infer_multiple_onnx_models(
initial_input_data, output_names_list)
print("Multiple subgraph inference completed!")
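Conceptually, multi-subgraph inference chains the split models: each subgraph's outputs join the pool of available tensors that later subgraphs draw their inputs from. Below is a minimal sketch of that idea with plain callables standing in for onnxruntime sessions; the function name `infer_chain` and the `(input_names, run_fn)` shape are illustrative assumptions, not the PR's actual API.

```python
def infer_chain(subgraphs, initial_input_data):
    """Run split subgraphs in order, pooling outputs for later ones.

    subgraphs: list of (input_names, run_fn) pairs, where run_fn maps a
    {name: tensor} dict to a {name: tensor} dict. A run_fn stands in for
    an onnxruntime InferenceSession in the real code.
    """
    available = dict(initial_input_data)  # tensors produced so far
    for input_names, run_fn in subgraphs:
        feed = {name: available[name] for name in input_names}
        outputs = run_fn(feed)
        available.update(outputs)  # later subgraphs may consume these
    return available
```

For example, a two-stage split where the second subgraph consumes the first subgraph's output resolves its feed from the shared pool automatically.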
@seanshpark (Contributor), Mar 4, 2025:

Q) Is infer_single_onnx_model for the "source" ONNX model, and infer_multiple_onnx_models for our "target" split models?

chenyx113 (Contributor, Author):

Yes, we use this code to verify the inference results of the split models, then compare the output data and evaluate the accuracy against the "source" ONNX model. The comparing code will be in the next PR.
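Since the comparison code is deferred to a follow-up PR, the obvious shape of such a check would be a per-output maximum absolute difference between the source model's outputs and the merged split-model outputs. This is a guess at that shape, not the PR's code; `max_abs_diff` is a hypothetical helper name.

```python
import numpy as np

def max_abs_diff(source_outputs, split_outputs):
    """Per-output max absolute difference between the source model's
    outputs and the pooled outputs of the split subgraphs.
    Hypothetical helper; the actual comparison lands in a later PR.
    """
    diffs = {}
    for name, ref in source_outputs.items():
        got = np.asarray(split_outputs[name])
        diffs[name] = float(np.max(np.abs(np.asarray(ref) - got)))
    return diffs
```

A near-zero difference on every output name would indicate the split preserved the source model's numerics.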

chenyx113 (Contributor, Author):

Oh, yes, I've moved the parameter exception check out of the loop now, thank you :)

update code as the review comment
move parameter exception checking out of loop
@seanshpark (Contributor) left a review:

LGTM

@seanshpark seanshpark merged commit 935afd2 into Samsung:master Mar 5, 2025
5 checks passed