Support for open source EDA tools from containers #221
Conversation
This PR adds support for running open source EDA tools directly from containers. By default, tools are run from the local host, but a configurable parameter `use_containers` enables running them from containers. There is also a parameter `container_daemon` that allows the user to set another daemon such as Podman. The configuration comes from a centralized makefile that is imported by the other backends, overriding the currently defined tools with their container counterparts. The backends were adjusted to use this makefile and to call the tools via variables.
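As a rough illustration of the idea (function and parameter names here are hypothetical, not the PR's actual makefile variables), a `use_containers` switch essentially swaps a local tool invocation for a containerized one:

```python
import shlex

def tool_command(tool, args, use_containers=False,
                 container_daemon="docker", image_registry="hdlc"):
    """Build the command line for an EDA tool, optionally wrapped in a
    container invocation. Names are illustrative, not the PR's API."""
    if use_containers:
        # e.g. `docker run hdlc/yosys yosys <args>`, or the Podman
        # equivalent when container_daemon="podman"
        return [container_daemon, "run", f"{image_registry}/{tool}", tool] + args
    return [tool] + args

print(shlex.join(tool_command("yosys", ["-p", "synth"], use_containers=True)))
```

The same pattern applies whether the daemon is Docker or Podman, since both accept the `run IMAGE CMD ARGS...` form.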
Thanks for this. I know too little about containers myself to feel comfortable in reviewing this alone, but I have some questions
Let's close this and focus on adding support for supplying a run command instead.
This is possible, but non-idiomatic in the context of containers. Having all the tools available in a single environment is the approach of a Virtual Machine. Containers are more lightweight and tool specific. See https://hdl.github.io/containers/#_usage and librecores/docker-images#33.
The overhead of starting multiple containers is negligible compared to the download time of a single huge image vs small tool-specific images. All the images in hdl/containers share the same base. Therefore, when retrieving multiple images, you only need to get the unique layers, not the full size of each image. Nevertheless, we also provide images which include multiple tools. edalize should work there already, because in that use case edalize is unaware of containers.
Supporting a prefix to the commands is equivalent to supporting commands with non-standard names or located at absolute paths. "Run this inside this container" is in some sense equivalent to "use this specific binary". The added complexity in the case of containers is binding the locations. In other words, since Yosys supports custom command prefixes, users will need a prefix for using antmicro-yosys, quicklogic-yosys or mycustom-yosys, won't they?
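The equivalence argued here can be sketched minimally: whether the prefix names a container invocation or an alternative binary, the dispatcher only prepends strings (hypothetical helper, not from the PR):

```python
def apply_prefix(prefix, cmd):
    """Prepend a user-configured prefix to a tool command line.
    The prefix may name a container invocation or be empty when the
    command itself already points at a specific binary."""
    return prefix.split() + cmd if prefix else cmd

# "Run this inside this container" ...
print(apply_prefix("docker run hdlc/yosys", ["yosys", "-V"]))
# ... is, in this sense, equivalent to "use this specific binary":
print(apply_prefix("", ["/opt/mycustom/bin/yosys", "-V"]))
```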
I think the container names should have default values but be customizable. I believe that @carlosedp's implementation is based on similar makefile-based solutions:
For reference, PyFPGA supports specifying images and/or commands through a YAML file: https://github.com/PyFPGA/pyfpga/blob/main/examples/configs.yml. /cc @rodrigomelo9
If you mean #218, I feel it is not a replacement for this feature. Managing the directories and sharing them with the container is not something to be done manually... I agree that this PR could be reworked to take advantage of a prefix feature, but I would recommend not discarding it.
Hi @umarcor. Thanks for the additional info. I don't understand though why #218 isn't enough to solve this? The idea would be that instead of edalize calling a tool directly, it would send the command line to a script that creates the full command line. A simple script could look like this:

```python
#!/usr/bin/python
import subprocess
import sys

print(sys.argv)
if sys.argv[1] == 'yosys':
    subprocess.call(['docker', 'run', 'hdlc/yosys'] + sys.argv[1:])
else:
    subprocess.call(sys.argv[1:])
```

I think it would make total sense to make a script that reads @rodrigomelo9's configs.yml. And to support the cases in #218, a script could e.g. call:

```python
if sys.argv[1] == 'xrun':
    subprocess.call(['nc', 'run', '-C', 'ncsim', '-Il'] + sys.argv[1:])
```

In my view, that would be the most flexible option. Or do I miss anything?
@olofk, your view is correct, and that's why I suggested that this PR can be reworked after #218 is merged. However, you are missing the fact that containers are primarily meant for isolation (not sandboxing, though). Therefore, by default no folder from the host is available inside the containers. Hence, the directory, or list of directories, needs to be explicitly bound.
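For illustration, a hedged sketch of what that explicit bind could look like when wrapping a tool call (image names, the `/src` mount point, and the helper itself are assumptions, not edalize code):

```python
import os

def containerized_command(image, cmd, workdir=None, daemon="docker"):
    """Wrap `cmd` in a container run that bind-mounts the work directory,
    since by default the container sees no host folders at all."""
    workdir = os.path.abspath(workdir or os.getcwd())
    return [daemon, "run", "--rm",
            "-v", f"{workdir}:/src",   # explicit bind: host dir -> /src
            "-w", "/src",              # run the tool from the bound dir
            image] + cmd

print(containerized_command("hdlc/ghdl", ["ghdl", "-a", "top.vhd"], workdir="/tmp"))
```

Without the `-v` bind, the tool inside the container would simply not see the project files, which is why this cannot be left implicit.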
Absolutely. I would love to see PyFPGA and edalize slowly integrating into each other. PyFPGA has already dealt with some of these container and directory issues, but the implementation is not complete yet. Some workflows are supported but others are untested. I believe it makes sense to make the container configuration part reusable by different project/task management tools.
Hi. I am totally open to discussing and redefining config.yaml if needed. I am creating a separate tool to solve the FOSS flow of PyFPGA (https://github.com/PyFPGA/openflow, totally WIP) where tools like Yosys, GHDL, ghdl-yosys-plugin, nextpnr (ice40 and ecp5), icestorm and prjtrellis are combined. The container configuration will be performed there, and it will be inherited by PyFPGA. So it is a great moment to define how the containers and the tool names will be specified :-D
Great! Seems like we're all on the same page here. I get the complexity of sharing files between the host and the containers, but this seems to be something we can't easily solve in a central way, so it has to be left to the users. For FuseSoC users (where Edalize is used as a backend) all files will be available in a clean build directory, so in that case it would be easy to share with the containers. For people using Edalize directly we have no clue where they keep their files, so I guess the users must be able to specify that somewhere.

But from the perspective of Edalize I think it's pretty simple by now if we just support an external runner and handle all the complexity inside of that. And this seems like a good thing to share between PyFPGA and Edalize (and HDLMake, @garcialasheras?). I saw @rodrigomelo9's proposal for a simplified config structure and that looks fine to me.

So yes, let's rework this PR, but instead of calling a container daemon, it will call a user-defined run command. In addition to that, I would like to see something like https://github.com/PyFPGA/pyfpga/blob/main/fpga/tool/openflow.py#L56 split out into a separate script that constructs a command line from a combination of configs.yml and a user-supplied command line.
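The runner idea above could be sketched roughly as follows. To stay dependency-free, a plain dict stands in for the parsed configs.yml; the keys and structure are assumptions, not PyFPGA's actual schema:

```python
# Stand-in for a parsed configs.yml; in real code this would be loaded
# with a YAML parser. Keys and structure are illustrative only.
CONFIG = {
    "yosys": {"command": ["docker", "run", "--rm", "hdlc/yosys", "yosys"]},
    "xrun":  {"command": ["nc", "run", "-C", "ncsim", "-Il", "xrun"]},
}

def build_command(argv, config=CONFIG):
    """Replace argv[0] with its configured launcher, or pass through
    unchanged when the tool has no entry in the config."""
    tool, rest = argv[0], argv[1:]
    entry = config.get(tool)
    return (entry["command"] + rest) if entry else argv

print(build_command(["yosys", "-p", "synth_ice40"]))
```

This keeps Edalize itself container-agnostic: it always emits a plain tool command line, and the external runner decides whether that becomes a container run, a remote dispatch, or a direct call.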
I will work on that. I am thinking of a class (…).
Ok, so with f8b3f66 I would argue we have what we need for a container workflow. Thanks for kicking this off, but I think we can close this one now.
Sure, that was a nice solution.
@carlosedp I think we can ship an …