
@andreasgraber

This PR enables configuration of trunk ports in a single run. It is based on the existing module aci_bulk_static_binding_to_epg.py, but in addition the goal is to push a list of EPGs to a list of interfaces.

Some additional notes:

  • The state: absent call always reports changed = True. This is also the case with aci_bulk_static_binding_to_epg. I guess this is due to some issues in the aci.py class (recursion handling). Maybe this needs to be addressed in a separate PR?
  • If an EPG does not exist in ACI, the APIC automatically creates it during execution of this bulk change. This is also the case with aci_bulk_static_binding_to_epg. I guess that's OK.
  • If the code looks fine to you, I can also add some integration tests for it.

Thanks for a review!

@akinross
Collaborator

akinross commented Dec 5, 2025

Hi @andreasgraber,

Thank you for creating the PR; we will have an internal discussion regarding your proposal.

A few questions already from my end:

  1. What is your main goal for the resource? Performance/speed enhancements?
  2. What is your concern with using the current module in a loop, or with parallel task execution per EPG?
  3. Have you considered the size of the payload? I think this way the chance gets bigger that the payload becomes too big for the APIC to process, and we would need to look into splitting the payload into multiple requests.
  4. Did you consider making adjustments to the current bulk module to allow multiple EPG inputs?
  5. I see you are preventing attribute adds to the polUni class in the payload function, but this would prevent Ansible from adding those additional attributes to the class. I do not think simply ignoring them is the correct approach.

@andreasgraber
Author

Hi @akinross,

Thanks for the quick reply!

  1. What is your main goal for the resource? Performance/speed enhancements?

In the end it is a performance issue. We have an IaC stack in front of the playbook where we configure servers in ACI. If a server has a few interfaces and each interface has 300-400 bindings, the playbook takes over 30 minutes when looping over the aci_static_binding_to_epg module. We also tried the performance improvements described in https://galaxy.ansible.com/ui/repo/published/cisco/aci/docs/optimizing/ but could not get it faster.
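
For illustration, a minimal sketch of the loop pattern we use today (the tenant, AP, interface and the `bindings` variable are hypothetical); every iteration is a separate APIC request:

```yaml
# Sketch of the current approach: one cisco.aci.aci_static_binding_to_epg
# call per EPG binding, so a vPC with 300-400 bindings means 300-400
# sequential APIC requests. All names below are hypothetical.
- name: Bind one EPG per task (one APIC request per iteration)
  cisco.aci.aci_static_binding_to_epg:
    host: apic.example.com
    username: admin
    password: "{{ apic_password }}"
    tenant: prod
    ap: servers_ap
    epg: "{{ item.epg }}"
    encap_id: "{{ item.vlan }}"
    interface_mode: trunk
    interface_type: vpc
    pod_id: 1
    leafs: [101, 102]
    interface: server01_vpc_polgrp   # vPC policy group name
    state: present
  loop: "{{ bindings }}"             # e.g. 300-400 {epg, vlan} entries
```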

  2. What is your concern with using the current module in a loop, or with parallel task execution per EPG?

I haven't tried parallel execution, but I think we would then need to handle rate-limiting issues from the APIC, as there are still hundreds of API calls to configure one interface.

  3. Have you considered the size of the payload? I think this way the chance gets bigger that the payload becomes too big for the APIC to process, and we would need to look into splitting the payload into multiple requests.

That's a good point! My experience is that the APIC can deal with very big payloads, but if somebody pushes a large number of interfaces at once it could get problematic. To address this we could limit the number of interfaces under the interface_configs attribute, or remove the list completely and allow only one interface to be configured at a time.
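
As a sketch of one possible mitigation (not a final design): the playbook side could already split a large interface list into chunks with Jinja2's built-in batch filter and send one bulk request per chunk. The all_interface_configs variable and the connection/EPG values are assumptions, and the parameter list is abbreviated:

```yaml
# Sketch only: send one bulk request per chunk of 50 interfaces so no
# single payload grows unbounded. 'all_interface_configs' is assumed
# to be a pre-built list of interface_configs entries.
- name: Push static bindings in batches of 50 interfaces
  cisco.aci.aci_bulk_static_binding_to_epg:
    host: apic.example.com
    username: admin
    password: "{{ apic_password }}"
    tenant: prod
    ap: servers_ap
    epg: web_epg
    encap_id: 100
    interface_mode: trunk
    interface_type: switch_port
    interface_configs: "{{ item }}"
    state: present
  loop: "{{ all_interface_configs | batch(50) | list }}"
```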

  4. Did you consider making adjustments to the current bulk module to allow multiple EPG inputs?

The current bulk module addresses the use case of pushing one EPG to many interfaces. We need the opposite use case: configuring many EPGs on one interface at once, like a "switchport trunk allowed vlan" command on Catalyst switches. I think it could be hard to implement this logic in the current module? But maybe the naming of our module is too easily confused with the existing one?
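
To make the use case concrete, here is a hypothetical invocation of the proposed module; none of these parameter names are final, they just illustrate the intended shape:

```yaml
# Hypothetical shape of the proposed module (names illustrative, not a
# committed API): many EPGs trunked onto one vPC in a single APIC
# request, analogous to "switchport trunk allowed vlan".
- name: Trunk many EPGs onto one vPC in one request
  cisco.aci.aci_bulk_static_bindings_to_epgs:
    host: apic.example.com
    username: admin
    password: "{{ apic_password }}"
    tenant: prod
    ap: servers_ap
    interface_mode: trunk
    interface_type: vpc
    pod_id: 1
    leafs: [101, 102]
    interface: server01_vpc_polgrp
    epgs:                        # one entry per allowed EPG/VLAN
      - epg: web_epg
        encap_id: 100
      - epg: db_epg
        encap_id: 200
    state: present
```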

  5. I see you are preventing attribute adds to the polUni class in the payload function, but this would prevent Ansible from adding those additional attributes to the class. I do not think simply ignoring them is the correct approach.

I just saw that the /uni object gets the Ansible annotation after running our bulk-change module. This feels wrong to me, as this object is managed by ACI itself and we only want to manage the fvRsPathAtt objects. But we could also remove this part?

Thanks for checking it internally; just let me know if you have any further questions.

Once we know whether we can proceed with the PR, I'm going to fix the pipeline issues.

Br,
andi

@akinross
Collaborator

akinross commented Dec 8, 2025

Hi @andreasgraber,

Regarding your point 1, why would you use the aci_static_binding_to_epg module and not the bulk equivalent? This would reduce the number of tasks. In that case you would either loop per EPG, or execute per EPG in parallel with an inventory per EPG. How many EPGs are you configuring?

Regarding your point 2, this would depend on the number of EPGs, as asked in point 1.

Regarding your point 4, the module configures N fvRsPathAtt MOs on an EPG. I think this module could also take a list of EPGs on which to configure these same fvRsPathAtt MOs. As far as I am aware, this module was made for performance optimisation of the single aci_static_binding_to_epg module. I personally think it could be adjusted further, also to limit the number of different modules that do similar configuration. This would be a discussion with the internal team.

Regarding your point 5, this is wrong and happens automatically in the payload function, because the assumption was previously made that no modules directly manipulate polUni. So in your situation each object that gets created should likely have this attribute added individually, in my opinion.

If you want some additional discussion on point 1 (parallel execution), we could set up a call to discuss your options there. Feel free to email me directly to schedule something.

@dariomi

dariomi commented Dec 8, 2025

Hi @akinross
This is the use case we discussed at Cisco Live EMEA this year.
Let me give an example to make things a bit clearer: we have one (1) vPC interface, towards a virtualization platform for example, that needs many (200+) static bindings.
Using the bulk_ module does not improve performance for us, since we have 1 interface and 200 static bindings, so we still have to loop over 200 EPGs, which takes quite a while to execute.
We could start doing parallel task execution, but that is still a large number of API calls for the APIC to handle compared with the suggested module/improvement.

We would be open to doing some work on the current bulk_ module, but I see an issue with the data structure that would need some change (see the sketch after this list). We would need:

  • a list of epgs instead of a single epg
  • an encap_id per EPG instead of one global encap_id
  • a per-leaf encap_id overwrite per EPG instead of one encap_id
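
A hypothetical sketch of what such an input could look like; all key names are illustrative only:

```yaml
# Illustrative only: a possible extension of the bulk module input
# covering the three points above. Key names are hypothetical.
epgs:
  - epg: web_epg
    encap_id: 100                # encap_id per EPG
  - epg: db_epg
    encap_id: 200
    leaf_encap_overrides:        # per-leaf encap_id overwrite per EPG
      - leafs: 101
        encap_id: 201
```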

@akinross
Collaborator

akinross commented Dec 8, 2025

Hi @dariomi, thanks for the clarifications. Will discuss with the team and get back to you on this.

@akinross
Collaborator

Hi @andreasgraber and @dariomi,

I had an internal discussion and we have agreed that we are willing to add this functionality. I will support you initially with writing the module, but afterwards it would need to follow the regular review process we have in our team.

I would need some more time to do a proper review of the code and to look at the various options we have / choices we would need to consider. Are there any deadlines on your side for this?

@andreasgraber
Author

Hi @akinross,

That is great news! Thanks for the information and your support!
If we could ship the feature by mid/end of January 2026, that would be great.

You can also ping us on Webex to proceed with the PR.

Br,
andi

@akinross akinross added jira-sync Sync this issue to Jira enhancement New feature or request labels Dec 12, 2025
@github-actions github-actions bot changed the title feat: Added aci_bulk_static_bindings_to_epgs module feat: Added aci_bulk_static_bindings_to_epgs module (DCNE-622) Dec 12, 2025