feat: add initial sources #1
Conversation
Force-pushed from 94acf85 to 6d0cca7
.kres.yaml (outdated)

@@ -2,6 +2,7 @@
 kind: common.Image
 name: image-omni-cloud-provider-qemu
 spec:
+  baseImage: ghcr.io/siderolabs/talosctl:v1.7.5
This provisioner needs talosctl, as the provisioning code imported from Talos requires it to spawn further processes. The provisioner searches for the talosctl binary in the working directory and in the PATH. When we use the talosctl image as the base image, the binary is found in the current working directory.
I think we can just use a copy stage and copy it over from the talosctl image.
I looked into that, but it seemed like I can copy over from other images in the same Dockerfile, but not from an arbitrary image. Might be wrong though.

Also, does this have any downsides? There is nothing in the talosctl base image other than talosctl.
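For what it's worth, Docker's `COPY --from` does accept an arbitrary image reference, not only a build stage declared in the same Dockerfile. A minimal sketch, assuming the binary lives at `/talosctl` inside the image (the in-image path is an assumption, not taken from this PR):

```dockerfile
# COPY --from can name any image, not only a stage defined earlier
# in this Dockerfile. The /talosctl source path is an assumption.
FROM scratch
COPY --from=ghcr.io/siderolabs/talosctl:v1.7.5 /talosctl /talosctl
```

This would keep the final image's own base while still pulling in only the talosctl binary.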
message QemuMachineSpec {
  string uuid = 1;
  string disk_path = 2;
}

message QemuMachineAllocationSpec {
  string talos_version = 1;
  string schematic_id = 2;
}
These are the resources internal to this provider, living in the cloud-provider:qemu namespace of Omni's state.
rootCmd.Flags().StringSliceVar(&rootCmdArgs.nameservers, "nameservers", []string{"1.1.1.1", "1.0.0.1"}, "the nameservers to use for the QEMU VMs.")
rootCmd.Flags().StringVar(&rootCmdArgs.imageFactoryPXEURL, "image-factory-pxe-url", "https://pxe.factory.talos.dev", "the URL of the image factory PXE server.")
rootCmd.Flags().IntVar(&rootCmdArgs.ipxeServerPort, "ipxe-server-port", 42420, "the port of the iPXE server.")
rootCmd.Flags().IntVar(&rootCmdArgs.numMachines, "num-machines", 8, "the number of machines to provision.")
This is the total pool of machines the provider provisions on startup. They go into an iPXE boot loop until they are assigned to a MachineRequest.
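For illustration, the flags above would be combined on startup roughly as follows. The binary name here is a hypothetical stand-in, and the values shown are just the defaults from the flag definitions:

```
omni-cloud-provider-qemu \
  --nameservers 1.1.1.1,1.0.0.1 \
  --image-factory-pxe-url https://pxe.factory.talos.dev \
  --ipxe-server-port 42420 \
  --num-machines 8
```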
Force-pushed from ec60dda to 9537f35
Force-pushed from 43f29aa to b937774
func (provider *Provider) watchSysVersion(ctx context.Context, st state.State) error {
	eventCh := make(chan state.Event)

	if err := st.Watch(ctx, system.NewSysVersion(omniresources.EphemeralNamespace, system.SysVersionID).Metadata(), eventCh); err != nil {
I wonder if we should add a proper health endpoint.
Yep, that's a good idea. The problem I'm working around here is that when Omni gets restarted, the state I have over the client API gets broken, but the runtime doesn't become aware of it. I couldn't find a way to make it aware of that (using gRPC options etc.); maybe there is one, but I couldn't find any.

So I'm not sure if it's an Omni problem - maybe a COSI one?
Holding this until the Omni side gets merged
The initial implementation.

Signed-off-by: Utku Ozdemir <utku.ozdemir@siderolabs.com>
Force-pushed from b937774 to 335731d
/m
The initial implementation.
Depends on: siderolabs/omni#449