
Conversation

@nPraml (Collaborator) commented Mar 18, 2025

Hello @rbygrave ,

Similar to what was described in Issue #97, we have a multi-tenant application running on 6 Tomcats. Each tenant has its own DB connection string and its own DB user and password (for regulatory reasons), which means each tenant gets its own connection pool.

We often run out of connections: with DB2 the limit is around 3,000 connections, after which the DB throws an exception. The 6 Tomcats with 100 tenants reach that limit quite quickly (only ~5 connections per Tomcat per tenant).

We want to prevent these DB exceptions proactively: each application server should know how many connections it is allowed to establish in total (in our case, 500 DB connections per Tomcat) and distribute this maximum across its individual connection pools.

In this PR, we have implemented a proposed solution:

  • We have created new methods in the listener to monitor the number of DB connections (see the sketch after this list).
  • The listener then randomly distributes the available connections to the respective connection pools.
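
Roughly, the listener extension looks like this. This is a hedged sketch based on the diff shown further down in this thread; only onBeforeCreateConnection is visible here, so the close-side counterpart name is an assumption:

import java.sql.Connection;

// Sketch of the extended DataSourcePoolListener as we understand it from this PR.
public interface DataSourcePoolListener {

  // existing events
  void onAfterBorrowConnection(Connection connection);
  void onBeforeReturnConnection(Connection connection);

  // new events proposed here: they let a single listener instance track the total
  // number of physical connections across all tenant pools of one Tomcat
  default void onBeforeCreateConnection() {}
  default void onAfterCloseConnection() {}  // assumed counterpart name, for illustration
}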

We wrote the test against MariaDB because it allows a maximum of 150 DB connections.

Could you please give us feedback?

We could then also split the PR into smaller PRs (e.g., new trim method and listeners).

Cheers
Noemi

executor.shutdownNow();
}

static class PoolManager implements DataSourcePoolListener {
@nPraml (Collaborator, Author) commented:

This is a simplified model of the listener in our application (in the case of DB2 it would allow 500 connections).
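
For readers of the thread, here is a minimal sketch of the idea behind such a PoolManager, assuming a per-Tomcat budget guarded by a Semaphore and the new create/close listener events. register() and onAfterCloseConnection() are illustrative names, not necessarily what the PR uses; forceTrim(int) is the trim method added in this PR:

import io.ebean.datasource.DataSourcePool;
import io.ebean.datasource.DataSourcePoolListener;

import java.sql.Connection;
import java.util.List;
import java.util.Random;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Minimal sketch, not the exact PR code: one listener shared by all tenant pools,
// guarding the per-Tomcat budget of physical connections (500 for DB2 in our case).
class PoolManager implements DataSourcePoolListener {

  private final List<DataSourcePool> pools = new CopyOnWriteArrayList<>();
  private final Semaphore semaphore = new Semaphore(500);
  private final Random random = new Random();

  void register(DataSourcePool pool) {
    pools.add(pool);
  }

  // New event from this PR: take one permit before a physical connection is created.
  // While no permit is free, force a randomly chosen pool to trim idle connections.
  public void onBeforeCreateConnection() {
    try {
      while (!semaphore.tryAcquire(50, TimeUnit.MILLISECONDS)) {
        pools.get(random.nextInt(pools.size())).forceTrim(25);
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }

  // Assumed counterpart event (illustrative name): return the permit when a
  // physical connection is closed.
  public void onAfterCloseConnection() {
    semaphore.release();
  }

  // Existing listener events, not needed for this sketch.
  @Override
  public void onAfterBorrowConnection(Connection connection) {}

  @Override
  public void onBeforeReturnConnection(Connection connection) {}
}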

DataSourcePool pool2 = getPool();

try {
  consumeConnections(pool1, 100);
@nPraml (Collaborator, Author) commented:

In the test there are 2 clients, each requesting 100 connections.
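
The consumeConnections helper itself is not visible in this thread. Purely as illustration, and assuming the thread count, hold time, and error handling, such a helper could look like this:

import io.ebean.datasource.DataSourcePool;

import java.sql.Connection;
import java.sql.SQLException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative guess, not the PR's actual helper: each "client" borrows the
// requested number of connections on its own threads and holds them briefly,
// so two clients with 100 connections each put the MariaDB limit of 150 under pressure.
class ConsumeConnections {

  static void consumeConnections(DataSourcePool pool, int count) {
    ExecutorService executor = Executors.newFixedThreadPool(count);
    for (int i = 0; i < count; i++) {
      executor.submit(() -> {
        try (Connection connection = pool.getConnection()) {
          Thread.sleep(500); // hold the connection for a while to create real pressure
        } catch (SQLException | InterruptedException e) {
          throw new RuntimeException(e);
        }
      });
    }
    executor.shutdown();
  }
}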

try {
  while (!semaphore.tryAcquire(50, TimeUnit.MILLISECONDS)) {
    System.out.println("trim required");
    pools.get(random.nextInt(pools.size())).forceTrim(25);
@nPraml (Collaborator, Author) commented:

If there are no free connections left, the tenant must wait while 25 connections are released from a randomly chosen pool (tenant).

@rbygrave (Member) commented:

> feedback?

Looks good - I'm happy with it.

> We could then also split the PR into smaller PRs

No, I think it would be better to leave it as 1 PR. The only issue here is that there is a merge conflict.

/**
 * Called before a connection is created
 */
default void onBeforeCreateConnection() {}
A collaborator commented:

I think we should pass the pool as a parameter here.
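
Something like the following (a sketch of the suggested signature; the later commit "pool as parameter" indicates this is roughly what was done, but the exact Javadoc and parameter name may differ):

/**
 * Called before a connection is created for the given pool.
 */
default void onBeforeCreateConnection(DataSourcePool pool) {}  // sketch of the suggested signature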

@nPraml force-pushed the max-connection-with-listener branch 3 times, most recently from 0b352a1 to f75abd1 on March 19, 2025 14:38

Commits:

  • ADD: forceTrim
  • intermediate commit
  • test extended + fix
  • revert logback
  • revert
  • pool as parameter
  • fix test

@nPraml force-pushed the max-connection-with-listener branch from f75abd1 to 67bba14 on March 19, 2025 14:42
@nPraml (Collaborator, Author) commented Mar 19, 2025

@rPraml I have extended the parameter list to include pool

@rbygrave I have rebased the PR

@rPraml (Collaborator) commented Mar 19, 2025

Please hold off on merging. I would like to do more testing and a critical review within our team.

@rPraml (Collaborator) commented Mar 26, 2025

The problem with the "stealing" approach is that we run into deadlocks.

When pool1 wants pool2 to trim some connections while pool2 wants pool1 to trim some connections, we have a deadlock.

So we do not want to pursue this approach (nevertheless, we might still add the additional listener events if you think they are useful).
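
To make the circular wait concrete, here is a schematic sketch: plain lock objects stand in for the internal pool locks, and this is not the real pool code.

// Two "pools", each holding its own lock while it creates a connection and
// then asking the other pool to trim. Run this and the two threads will
// typically deadlock, which is the scenario described above.
public class StealingDeadlockSketch {

  static final Object pool1Lock = new Object();
  static final Object pool2Lock = new Object();

  public static void main(String[] args) {
    Thread t1 = new Thread(() -> {
      synchronized (pool1Lock) {       // pool1 is creating a connection
        sleep(100);
        synchronized (pool2Lock) {     // ... and wants pool2 to forceTrim
          System.out.println("pool1 trimmed pool2");
        }
      }
    });
    Thread t2 = new Thread(() -> {
      synchronized (pool2Lock) {       // pool2 is creating a connection
        sleep(100);
        synchronized (pool1Lock) {     // ... and wants pool1 to forceTrim
          System.out.println("pool2 trimmed pool1");
        }
      }
    });
    t1.start();
    t2.start();
  }

  static void sleep(long millis) {
    try {
      Thread.sleep(millis);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}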
