
restructure discovery to do more up front #315

Merged
dedelala merged 6 commits into vtgateproxy from esme-vtgateproxy-moar-parse on Apr 25, 2024

Conversation

dedelala

Description

  • move organization of target hosts to parse time so we do it less often
  • pass affinityValue in at run time instead of in the connection attributes
  • rework discovery metrics
  • add discovery data to the /debug/status endpoint


for poolType := range targets {
    if b.affinityField != "" {
        sort.Slice(targets[poolType], func(i, j int) bool {
Collaborator:

I think we want to sort here to pull the same-AZ hosts to the front of the list, but I don't think we want to slice out the ones in the other AZ.

If we don't have numConnections hosts in the same AZ, I think we still want the discovery to try to find that many valid targets, even if that means crossing AZs.

Author:

we're not dropping any targets here, only sorting

Collaborator:

🤦 Oh duh of course, my bad

Collaborator:

I should actually get coffee before reviewing code next time.

            return b.affinityValue == targets[poolType][i].Affinity
        })
    }
    if len(targets[poolType]) > *numConnections {
Collaborator:

Much simpler than my unnecessary min() function.

Author:

we could use the built in min if we weren't on go 1.18 😢

Collaborator:

I know!!
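
As a side note on the min discussion, here is a minimal, hypothetical sketch of the truncation the reviewed snippet performs, capping each pool's target list at numConnections, together with the local helper one would need on toolchains older than Go 1.21 (where the built-in min landed). The targetHost type and the truncateTargets/minInt names are illustrative, not the PR's actual code:

    // Illustrative only: cap each pool's target list at numConnections.
    // targetHost is a hypothetical stand-in for the discovery target type.
    type targetHost struct {
        Addr     string
        Affinity string
    }

    func truncateTargets(targets map[string][]targetHost, numConnections int) {
        for poolType := range targets {
            // equivalent of the length check in the reviewed snippet
            if len(targets[poolType]) > numConnections {
                targets[poolType] = targets[poolType][:numConnections]
            }
            // with Go 1.21+ (or the helper below) the same thing is:
            // targets[poolType] = targets[poolType][:minInt(numConnections, len(targets[poolType]))]
        }
    }

    // minInt stands in for the built-in min on Go versions before 1.21.
    func minInt(a, b int) int {
        if a < b {
            return a
        }
        return b
    }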

demmer (Collaborator) left a comment:

Looks great! Nice rework!

demmer (Collaborator) left a comment:

One thing I just realized... we need some synchronization between the /debug/status page and the new connections / parser updating.

demmer commented Apr 25, 2024:

Expanding that a bit, I think this is real close but we need some locking to make this safe, since we have a race between new inbound mysql connections, the parser detecting changes to the file, and the debug page scraping the list.

I also think it'd be a nice little win if the debug page also did the AZ affinity sorting that the actual connection logic is going to do.

So I think the solution would be:

  • Add an RWLock and lock it when done parsing and replacing the b.targets
  • Refactor to add a b.getTargetsForPool(poolType string) method which takes the read lock and returns a copy of the slice for the given target, randomly shuffled
  • Implement a b.getAllPools which takes the read lock and returns the set of pools.

Then the debug page can use those two helper methods which I think means we'll get a thread safe debug view that would randomly shuffle sort each time like the actual connection code does.
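
A rough sketch of what that could look like, assuming a balancer-style type that owns the parsed targets. The getTargetsForPool and getAllPools names come from the comment above; the balancer and targetHost types, the field names, and the shuffle details are illustrative guesses rather than the actual implementation:

    package discovery // illustrative package name

    import (
        "math/rand"
        "sync"
    )

    // targetHost is a hypothetical stand-in for the discovery target type.
    type targetHost struct {
        Addr     string
        Affinity string
    }

    type balancer struct {
        mu      sync.RWMutex
        targets map[string][]targetHost
    }

    // replaceTargets would be called by the parser once it has finished
    // building a fresh target map; it holds the write lock only for the swap.
    func (b *balancer) replaceTargets(newTargets map[string][]targetHost) {
        b.mu.Lock()
        defer b.mu.Unlock()
        b.targets = newTargets
    }

    // getTargetsForPool takes the read lock and returns a shuffled copy of the
    // slice for the given pool, so callers never observe the map mid-update.
    func (b *balancer) getTargetsForPool(poolType string) []targetHost {
        b.mu.RLock()
        defer b.mu.RUnlock()
        out := make([]targetHost, len(b.targets[poolType]))
        copy(out, b.targets[poolType])
        rand.Shuffle(len(out), func(i, j int) { out[i], out[j] = out[j], out[i] })
        return out
    }

    // getAllPools takes the read lock and returns the set of pool names.
    func (b *balancer) getAllPools() []string {
        b.mu.RLock()
        defer b.mu.RUnlock()
        pools := make([]string, 0, len(b.targets))
        for poolType := range b.targets {
            pools = append(pools, poolType)
        }
        return pools
    }

With a shape like this, the /debug/status handler and the connection path would both read through the same two helpers, which is what makes the shared target map safe to scrape while the parser is replacing it.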

demmer (Collaborator) left a comment:

Overall PR looks great!

I know it's a bit odd to approve one's own code, but Esme did most of the work here and LGTM!

dedelala merged commit eefd8b0 into vtgateproxy on Apr 25, 2024
1 check passed
dedelala deleted the esme-vtgateproxy-moar-parse branch on April 25, 2024 at 01:14
dedelala added a commit that referenced this pull request Jul 30, 2024
* move organization of target hosts to parse time

* rework metrics and logging of parse errors

* add discovery bits to debug status page

* reset parseErr in the right place

* add sync and change debug page to do the shuffle

* unrelated but just move some code around

---------

Co-authored-by: Michael Demmer <mdemmer@slack-corp.com>
dedelala added a commit that referenced this pull request Nov 12, 2024
dedelala added a commit that referenced this pull request Jan 8, 2025