
DNS binding #15

Open
bitchecker opened this issue Sep 22, 2020 · 7 comments

Comments

@bitchecker

Hi,
I'm trying to run this service, but I see that the DNS server started inside the container is bound to its internal IP address, so it's not possible to expose the DNS service to the network.

With this limitation, it's not possible to configure a container/virtualization host on the network with services published via domain names.

As you can see from netstat, the service is bound to the internal container IP address:

udp        0      0 $_container_ip_:53            0.0.0.0:*                           1654/conmon

which is not reachable from the network, even when using --publish.

I've also tried not specifying a binding address, but I get this error:

Error: cannot listen on the UDP port: listen udp4 :53: bind: address already in use

because there are other services related to libvirt already running.

@lionelnicolas
Owner

This is not intended to be used with --net host. So, as you said, the internal dnsmasq binds port 53 on its own IP, and you then need to expose port 53 using --publish.

But if you already have a DNS service running on your host, the --publish may fail because the port is already in use by another DNS service (like the dnsmasq started by libvirt or NetworkManager).
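
As a rough sketch (not tested against your setup, and the 192.0.2.10 address plus the trimmed-down flags are placeholders), publishing port 53 on one specific host address instead of all interfaces should avoid that conflict, as long as the existing DNS services are only bound to specific addresses (as libvirt's dnsmasq usually is on virbr0):

# bind the published DNS port to a single host address so it doesn't clash
# with dnsmasq instances already bound to other addresses (virbr0, NetworkManager)
podman run \
  --name dalidock \
  --cap-add NET_ADMIN \
  --publish 192.0.2.10:53:53/udp \
  --publish 80:80 \
  lionelnicolas/dalidock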

  1. What is your OS?
  2. Do you have NetworkManager running?
  3. If you run that command on your host, could you paste the output here? sudo netstat -plunt | grep -w 53

@bitchecker
Author

bitchecker commented Sep 22, 2020

Hi,

  1. I'm running it on CentOS 8 with libvirt and podman
  2. Yes, NetworkManager is running and its run directory is mounted as a read-only volume in the container
  3. This is the output of the command:
tcp        0      0 $_virbr0_address_:53                        0.0.0.0:*               LISTEN      1090/dnsmasq        
udp        0      0 $_dalidock_container_ip_:53                 0.0.0.0:*                           1199/podman         
udp        0      0 $_virbr0_address_:53                        0.0.0.0:*                           1090/dnsmasq 

@lionelnicolas
Owner

I don't understand why podman is listening on $_dalidock_container_ip_; it should be the IP address of your network bridge. Maybe this is specific to podman.

@bitchecker
Author

bitchecker commented Sep 22, 2020

It's the same configuration as with Docker; on that container management system too, you get an IP address from the Docker subnet.

If I run

podman run \
  --name dalidock \
  --net host \
  --cap-add NET_ADMIN \
  --publish $_LIBVIRT_HOST_:53:53/udp \
  --publish 80:80 \
  --env DNS_DOMAIN=my.local.env \
  --env LB_DOMAIN=my.local.env \
  --volume /run/NetworkManager:/run/NetworkManager:ro \
  --volume /var/run/libvirt:/var/run/libvirt:ro \
  lionelnicolas/dalidock

and disable the internal dnsmasq used for the NAT subnet, I can see the virtual machines that were detected.

dalidock[14]: [INFO]  wait for domain test QEMU guest agent to reply
dalidock[14]: [INFO]  name=test       hostname=test       ip=None            net=br0        domain=my.local.env use_wildcard=False

At this point, will it already be reachable via $_hostname_.$_domain_, in this case test.my.local.env? Are metadata always necessary, or only for setting up custom DNS/LB entries?

PS: of course ip=None is just because the virtual machine has not completely started yet

@lionelnicolas
Owner

lionelnicolas commented Sep 22, 2020

Yes, metadata and labels are only needed for custom DNS/LB.

So in that case, all these commands should return the correct IP:

# using `host` from `bind-utils` package on RedHat-like or `bind9-host` on Debian-like

host test ${LIBVIRT_HOST}

host test.my.local.env ${LIBVIRT_HOST}

Or using dig:

dig @${LIBVIRT_HOST} test.my.local.env

I see that you have defined a QEMU guest agent in your VM config (wait for domain test QEMU guest agent to reply). If the guest agent is not running inside the VM, dalidock will time out when trying to get the IP address. If it times out, then no DNS record will be created, as there is no known IP to associate. You can customize the timeout by adding:

    --env LIBVIRT_IP_TIMEOUT=120

If you want to make dalidock use libvirt DHCP leases instead of the guest agent, you'll need to remove the org.qemu.guest_agent.0 channel from the VM config.
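
For reference, the channel in question usually looks like this in the libvirt domain XML (editable with virsh edit); the exact sub-elements may differ in your config:

<channel type='unix'>
  <!-- guest agent channel: removing it makes dalidock use libvirt DHCP leases instead, as noted above -->
  <target type='virtio' name='org.qemu.guest_agent.0'/>
</channel>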

@bitchecker
Author

Yes, I always use the QEMU guest agent, but that virtual machine was a simple test that was still booting, so after that I need to configure it and so on. I think it could fail while I'm still configuring it, but after a reboot it will work properly; the timeout option can be very useful, though.

I will try to configure it and run some tests, also changing the domain and using the LB domain.

@lionelnicolas
Owner

Ok!

I'm thinking about adding a fallback to DHCP lease discovery if the guest agent fails; that would help in your case.
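
If it helps while testing, and assuming the VM gets its address from a libvirt-managed NAT network (such as the default one) rather than a plain bridge like br0, the lease data such a fallback would rely on can be inspected with:

# show DHCP leases handed out by libvirt's dnsmasq for the "default" network
virsh net-dhcp-leases default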
