Add env qa command group #159
base: main
Conversation
Force-pushed from 73cdda0 to deb7722
Force-pushed from 3019566 to b8c8b58
KevinFairise2 left a comment:
No big blocker to having it merged as is on my end. I just left some comments to raise a few points that we might need to address at some point when we want to support more complex environments.
But this looks great!
```python
        return self.state_dir / "agent_config"

    @cached_property
    def agent_config(self) -> AgentConfig:
```
I think we will need a way to make the AgentConfig dependent on the Environment. Depending on how the agent is installed and on the environment, the configuration can be quite different from just a `datadog.yaml` file:

- Docker -> environment variable list
- K8s + Helm -> `values.yaml`
- K8s + operator -> `DatadogAgent` CRD
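One way to address this (a minimal sketch, not the PR's actual design; the `AgentConfigRenderer` classes and their `render` method are hypothetical names) is to give each environment type its own renderer that turns a shared config mapping into environment-specific artifacts:

```python
from abc import ABC, abstractmethod


class AgentConfigRenderer(ABC):
    """Renders a shared agent config for a specific environment type."""

    @abstractmethod
    def render(self, config: dict) -> dict:
        """Return environment-specific artifacts (env vars, file contents, ...)."""


class DockerConfigRenderer(AgentConfigRenderer):
    def render(self, config: dict) -> dict:
        # Docker: flatten top-level keys into DD_-prefixed environment variables
        return {"env": {f"DD_{key.upper()}": str(value) for key, value in config.items()}}


class HelmConfigRenderer(AgentConfigRenderer):
    def render(self, config: dict) -> dict:
        # K8s + Helm: nest the config under the chart's values layout
        return {"values.yaml": {"datadog": config}}
```

Each environment would then pick its renderer, so the `agent_config` property could stay environment-agnostic while the output format varies.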
| """ | ||
|
|
||
| @abstractmethod | ||
| def run_command(self, command: list[str]) -> None: |
I do not think that method will make sense in every QA environment. For VMs yes, and for Docker probably also, but it is more complicated in Kubernetes/ECS environments, where running a command is not something explicit. We could decide to run the command in a random pod where the agent is running, but that seems a bit too magic.
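One common way to express this concern in code (a hedged sketch, not the PR's API; `supports_run_command` and the class names are hypothetical) is to make `run_command` an optional capability rather than a required abstract method, so environments like Kubernetes/ECS can simply decline it:

```python
class Environment:
    """Base environment; running ad-hoc commands is an optional capability."""

    def supports_run_command(self) -> bool:
        return False

    def run_command(self, command: list[str]) -> None:
        raise NotImplementedError(
            f"{type(self).__name__} does not support running ad-hoc commands"
        )


class VMEnvironment(Environment):
    def supports_run_command(self) -> bool:
        return True

    def run_command(self, command: list[str]) -> None:
        # Placeholder: a real implementation would execute over SSH
        print(f"running on VM: {command}")
```

The CLI could then check `supports_run_command()` and print a clear error instead of guessing a pod to exec into.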
```python
env_vars = agent_config_to_env_vars(agent_config)
env_vars.update(self.config.env)
```
Why do we need that? Shouldn't we just mount the config file that the agent uses?
Or just have an AgentConfigTemplate that reflects the way the Agent running in Docker should be configured, i.e. directly with environment variables? To me it sounds acceptable that when you are running the agent in Docker you configure it the Docker way (through environment variables) instead of doing a mapping of the config file to environment variables on our end.
That will not work that way with Kubernetes-based environments, where the agent configuration is done with Helm values.
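For context, a helper like the `agent_config_to_env_vars` call seen in the diff could plausibly look like the sketch below (this is an assumption about its behavior, not the PR's actual implementation; it relies on the Agent's documented convention of `DD_`-prefixed variables with nested keys joined by underscores):

```python
def agent_config_to_env_vars(config: dict, prefix: str = "DD") -> dict[str, str]:
    """Flatten a nested agent config mapping into DD_-prefixed env vars.

    e.g. {"logs_config": {"container_collect_all": True}}
         -> {"DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL": "True"}
    """
    env_vars: dict[str, str] = {}
    for key, value in config.items():
        name = f"{prefix}_{key.upper()}"
        if isinstance(value, dict):
            # Recurse, carrying the accumulated prefix down into nested sections
            env_vars.update(agent_config_to_env_vars(value, prefix=name))
        else:
            env_vars[name] = str(value)
    return env_vars
```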
The next PR will add usage documentation to the Agent repo's site.
A good way to test would be to:

1. `dda env qa start` to start the `linux-container` QA environment
2. `dda env qa run agent status` to confirm it's working
3. `dda env qa config find` to locate the `datadog.yaml` file and modify its contents in some way
4. In the directory containing the `datadog.yaml` file (which you can also open directly in your file manager via the `dda env qa config explore` command), add an `integrations` directory with a subdirectory `postgres` containing a `conf.yaml` file (or any file ending in `.yaml`/`.yml`) with valid contents
5. `dda env qa config show` to confirm your changes appear correct
6. `dda env qa config sync` to update the configuration used by the Agent
7. `dda env qa run agent status` to confirm that the `postgres` integration is now running
8. `dda env qa stop -r` to stop and remove the QA environment (or the more involved `dda env qa stop` followed by `dda env qa remove`)
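The original description's example `conf.yaml` contents did not survive extraction; a hypothetical minimal file, assuming the standard Datadog integration layout (`init_config` plus an `instances` list) and placeholder connection values, might look like:

```yaml
init_config:

instances:
    # Placeholder values; point these at a reachable Postgres instance
  - host: localhost
    port: 5432
    username: datadog
```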