
Fix: Caching problems with newer providers #10

Merged · 3 commits · Jul 16, 2023

Conversation

@LenardHess (Collaborator) commented Jul 10, 2023

This fixes the caching issues (#3) where new providers such as Cloudflare do not respect the cache and update at every daemon interval.

With this change, we now have the following rules for providers that want caching.
Old provider implementations shall:

  • Read out $config{$host}{'wantip'} and process it.
  • Set $config{$host}{'status'} to the result.

New provider implementations shall:

  • Read out $config{$host}{'wantipv4'} and process it.
  • Set $config{$host}{'status-ipv4'} to the result.
  • Read out $config{$host}{'wantipv6'} and process it.
  • Set $config{$host}{'status-ipv6'} to the result.
    In all cases, the 'status*' result shall be set to "good" on success, and to a readable error otherwise, e.g. "bad" or "failed".
  • Existing providers may still set status, but any modified or newly added ones shouldn't. With this fix, we now take care of setting status based on status-ipv* if it is not set, for backwards compatibility (see the sketch after this list).
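
A minimal sketch of that backwards-compatibility step (illustrative only, not the actual ddclient code; it assumes only the fields named above):

for my $host (@hosts) {
    next if defined($config{$host}{'status'});  # the provider set it itself
    # Derive the legacy field from the per-family results; an error must
    # win over "good" so that failures are not masked.
    for my $s ($config{$host}{'status-ipv4'}, $config{$host}{'status-ipv6'}) {
        next unless defined($s);
        $config{$host}{'status'} = $s;
        last if $s ne 'good';
    }
}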

The configuration shall either:

  • Contain the use parameter
    In this case, the provider functions will be called with wantip set, and (depending on the IP address type) either wantipv4 or wantipv6 set.
  • Contain one or both of: usev4, usev6
    In this case, the provider functions will be called with wantipv4 and/or wantipv6 set. Additionally, for backwards compatibility, wantip will be set to the value of wantipv4 (if usev4 is present and valid). A configuration sketch of both styles follows.
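
For illustration, a ddclient.conf sketch of both styles (the zone and host names are hypothetical, and credentials are omitted):

# Legacy style: one 'use' method. Providers are called with wantip set,
# plus wantipv4 or wantipv6 depending on the address type.
protocol=cloudflare
use=web
zone=example.com
myhost.example.com

# Current style: per-family methods. Providers are called with wantipv4
# and/or wantipv6 set; wantip mirrors wantipv4 for backwards compatibility.
protocol=cloudflare
usev4=webv4
usev6=webv6
zone=example.com
myhost.example.com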

Outstanding tasks:

  • Document provider rules for nic_*_update() functions
  • Document 'use*' legacy vs current behavior
  • Review providers for correct behavior against rules
  • Investigate whether easydns & porkbun are regressions (Fixed by updating them to new provider rules)
Current provider behavior
  • dyndns1
    delete wantip
    set status

  • dyndns2
    delete wantipv*
    set status, status-ipv4, status-ipv6

  • dnsexit2
    delete wantip
    set status

  • noip
    delete wantip
    set status

  • dslreports1
    delete wantip
    set status

  • domeneshop
    delete wantip
    set status

  • zoneedit1
    delete wantip
    set status

  • easydns
    delete wantipv*
    set status (not status-ipv4/status-ipv6 !!!)
    EDIT: Updated to set status-ipv4/status-ipv6 instead of status

  • namecheap
    delete wantip
    set status

  • nfsn
    delete wantip
    set status

  • njalla
    delete wantipv*
    does not set status, mtime etc. (!!!)

  • sitelutions
    delete wantip
    set status

  • freedns
    delete wantipv*
    set status-ipv*

  • 1984
    delete wantip
    does not set status, mtime etc. (!!!)

  • changeip
    delete wantip
    set status

  • godaddy
    delete wantip
    set "status"

  • googledomains
    delete wantip
    set "status"

  • mythicdyn
    checks ipv6 (not wantipv6 !!!) only to distinguish v4 vs v6; the service uses the connection's source address to determine the IP
    set status

  • nsupdate
    delete wantip
    set ip, mtime, but not status

  • cloudflare
    delete wantipv*
    set status-ipv*

  • hetzner
    delete wantipv*
    set status-ipv*

  • yandex
    delete wantip
    set status

  • duckdns
    delete wantip
    set status

  • freemyip
    delete wantip
    set status

  • woima
    delete wantip
    set status

  • dondominio
    delete wantip
    set status

  • dnsmadeeasy
    delete wantip
    set status

  • ovh
    delete wantip
    set status

  • porkbun
    delete wantipv*
    set status (not status-ipv4/status-ipv6 !!!)
    EDIT: Updated to set status-ipv4/status-ipv6 instead of status

  • cloudns
    delete wantip
    set status

  • dinahosting
    delete wantip
    set status

  • gandi
    delete wantip
    set status

  • keysystems
    delete wantip
    set status

  • regfishde
    delete wantip (twice, ipv6 looks broken!!!)
    set status

  • enom
    delete wantip
    set status

  • digitalocean
    delete wantipv*
    set status-ipv*

  • infomaniak
    delete wantipv*
    set status
    set status-ipv*

@rrthomas (Owner)

Nice simple fix!

@LenardHess (Collaborator, Author)

Thanks, the notes weren't as simple 😄

My first draft of how the functions should behave is below. Note that most providers don't currently behave this way, and that this draft only covers the wantip* and status* fields so far.

#  The update function performs the actual record update.
#  It receives an array of hosts as its argument.
#  For each host it shall perform the following:
#  - If $config{$host}{'wantipv4'} is set, try updating the host
#    record's IPv4 address. If the update succeeded, set
#    $config{$host}{'status-ipv4'} to "good", else set it to "bad"
#    or "failed" or any other expressive error.
#  - If $config{$host}{'wantipv6'} is set, try updating the host
#    record's IPv6 address. If the update succeeded, set
#    $config{$host}{'status-ipv6'} to "good", else set it to "bad"
#    or "failed" or any other expressive error.

@LenardHess (Collaborator, Author) commented Jul 10, 2023

Going through the provider list, there are a few anomalies:

  • easydns and porkbun use wantipv* but set status; not sure yet how that will behave with the cache
  • 1984 and njalla do not set status, mtime etc., so they will probably always update at every interval instead of utilizing the cache
  • mythicdyn checks ipv6 instead of wantipv6, and also doesn't use the IP ddclient discovered; the service endpoint instead uses the IP of the connection as the value written into the DNS record. I wouldn't be surprised if this causes some caching issues
  • nsupdate does not set status, so caching has probably been broken there for a while now
  • regfishde seems to have broken IPv6 support due to deleting wantip twice

@LenardHess LenardHess linked an issue Jul 10, 2023 that may be closed by this pull request
@rrthomas (Owner)

Going through the provider list, there are a few anomalies:

Good analysis, thanks! I'll just comment on the one I know about, as I wrote it:

* mythicdyn checks `ipv6` instead of `wantipv6`, and also doesn't use the IP ddclient discovered; the service endpoint instead uses the IP of the connection as the value written into the DNS record. I wouldn't be surprised if this causes some caching issues

Indeed, with mythicdyn there's no way to supply an IP address to the protocol. Since Mythic imposes no rate limits, I concluded it would be pointless to go to the lengths required to implement caching.

@LenardHess (Collaborator, Author)

snip
Since Mythic imposes no rate limits, I concluded it would be pointless to go to the lengths required to implement caching.

Now that we've looked deeper into the caching, I would opt to still implement it as part of future cleanups. That keeps the providers more similar and is friendlier towards Mythic.

@LenardHess LenardHess marked this pull request as ready for review July 13, 2023 20:32
@LenardHess (Collaborator, Author) commented Jul 13, 2023

@rrthomas All done from my end; it might be good if you could review in case there's anything I've missed.

I'll try to do this tomorrow.

@rrthomas (Owner)

snip
Since Mythic imposes no rate limits, I concluded it would be pointless to go to the lengths required to implement caching.

Now that we've looked deeper into the caching, I would opt to still implement it as part of future cleanups. That keeps the providers more similar and is friendlier towards Mythic.

Mythic explicitly say they don't care how often you ping their DNS API; I think once every five minutes will be OK under any circumstances. I thought about this quite hard when I implemented the code: it's hard to be absolutely sure which address they used without querying their system after the update to find out what address they have for you, which doesn't really fit the structure of ddclient and would be much harder to program correctly.

@LenardHess (Collaborator, Author)

Current caching logic

[diagram: ddclient cache]

The bug with cloudflare and friends meant that if someone configured use, the caching code would always conclude it should update at the "Enough time passed since last attempt?" check.
This happened because status was missing, so the code compared atime against min-error-interval, with the latter defaulting to 5 minutes.
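
A simplified sketch of that decision path (the field names match the cache description above; %cache, %opt and the update call are illustrative):

my $status = $cache{$host}{'status'};
if (!defined($status) || $status ne 'good') {
    # Without 'status', the last attempt looks like a failure, so only
    # 'min-error-interval' (default: 5 minutes) gates the retry and an
    # update is attempted on nearly every daemon wakeup.
    nic_update($host)
        if (time() - $cache{$host}{'atime'}) > $opt{'min-error-interval'};
}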

@rrthomas (Owner)

Wow, amazing work @LenardHess!

@rrthomas (Owner)

Looks good to me, thanks! I made one minor point about English in a comment.

@LenardHess (Collaborator, Author)

Looks good to me, thanks! I made one minor point about English in a comment.

I don't see a comment anywhere?

@rrthomas (Owner) left a comment

I just made one small comment.

Review thread on ddclient.in (outdated, resolved)
This fixes caching issues when using the 'usev4' or 'usev6' parameters.
Without this, the "min-interval" and "warned-min-interval" limits will
not work.

For the legacy 'use' parameter, the wrapping code takes care of
translating 'status-ipv*' to 'status'.
@tamino202

How is the mtime in the cache determined? Is it different per protocol? For me that mtime is always 0. I am using the dyndns2 protocol with a custom server provider (Strato). That causes ddclient to always try to update because it is always over the max-interval.

Successfully merging this pull request may close these issues.

Cache issues in v3.10