Max DB-connection with listener #119
base: master
Conversation
    executor.shutdownNow();
  }

  static class PoolManager implements DataSourcePoolListener {
This is a simplified model of the listener in our application (in the case of DB2 it would allow 500 connections).
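To illustrate the idea behind such a listener, here is a minimal standalone sketch (hypothetical names, not the ebean API): a process-wide budget of physical connections, backed by a semaphore, that all tenant pools on one Tomcat would share.

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch: one process-wide cap on physical DB connections,
// shared by all tenant pools on a Tomcat (500 in the DB2 case).
public class ConnectionBudget {
    private final Semaphore permits;

    public ConnectionBudget(int maxConnections) {
        permits = new Semaphore(maxConnections);
    }

    // would be called from a pool listener before a physical connection is created
    public boolean tryReserve() {
        return permits.tryAcquire();
    }

    // would be called when a physical connection is closed
    public void release() {
        permits.release();
    }

    public int available() {
        return permits.availablePermits();
    }

    public static void main(String[] args) {
        ConnectionBudget budget = new ConnectionBudget(500);
        budget.tryReserve();
        budget.tryReserve();
        System.out.println(budget.available()); // prints 498
    }
}
```

A pool would refuse (or delay) creating a new physical connection when `tryReserve()` returns false, instead of letting the database reject it.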
    DataSourcePool pool2 = getPool();

    try {
      consumeConnections(pool1, 100);
In the test there are two clients, each requesting 100 connections.
    try {
      while (!semaphore.tryAcquire(50, TimeUnit.MILLISECONDS)) {
        System.out.println("trim required");
        pools.get(random.nextInt(pools.size())).forceTrim(25);
If there are no more free connections, the tenant must wait, and 25 connections are released from a random pool (tenant).
Looks good - I'm happy with it.
No, I think leaving it as one PR would be better. The only issue here is that there is a merge conflict.
    /**
     * Called before a connection is created
     */
    default void onBeforeCreateConnection() {}
I think we should pass the pool as parameter here.
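The suggested change could look roughly like the following sketch. `DataSourcePool` here is a stand-in interface, not the real ebean type; the default no-op keeps existing listener implementations compiling.

```java
// Hypothetical stand-in for the real pool interface
interface DataSourcePool {
    String name();
}

interface DataSourcePoolListener {
    // default no-op, so existing listeners keep compiling
    default void onBeforeCreateConnection(DataSourcePool pool) {}
}

// Example listener that remembers which pool asked to create a connection,
// e.g. to count connection creation per tenant
class RecordingListener implements DataSourcePoolListener {
    String lastPool;

    @Override
    public void onBeforeCreateConnection(DataSourcePool pool) {
        lastPool = pool.name();
    }
}

public class ListenerSketch {
    public static void main(String[] args) {
        RecordingListener listener = new RecordingListener();
        listener.onBeforeCreateConnection(() -> "tenant-a");
        System.out.println(listener.lastPool); // prints tenant-a
    }
}
```

With the pool passed in, one shared listener instance can make per-tenant decisions instead of needing one listener object per pool.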
Force-pushed 0b352a1 to f75abd1
Commit: ADD: forceTrim intermediate commit, test extended + fix, revert logback, revert pool as parameter, fix test
Force-pushed f75abd1 to 67bba14
Please wait with the merge. I would like to do more tests and a critical review in our team.

The problem with the "stealing" approach is that we run into deadlocks: when pool1 wants pool2 to trim some connections and pool2 wants pool1 to trim some connections, we have a deadlock. So we do not want to pursue this approach (nevertheless, we might add the additional listener events if you think they are useful).
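The deadlock scenario can be demonstrated with a small standalone sketch (not ebean code): each pool holds its own lock while asking the other pool to trim, and trimming needs that other pool's lock. `tryLock` with a timeout is used here only so the demo terminates instead of hanging.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TrimDeadlockSketch {

    static final boolean[] trimmed = new boolean[2];

    static boolean crossTrim(ReentrantLock own, ReentrantLock other,
                             CountDownLatch bothLocked, CountDownLatch bothTried)
            throws InterruptedException {
        own.lock();                  // pool works under its own lock
        try {
            bothLocked.countDown();
            bothLocked.await();      // wait until both pools hold their lock
            // ask the other pool to trim -> needs the other pool's lock
            boolean got = other.tryLock(200, TimeUnit.MILLISECONDS);
            if (got) {
                other.unlock();
            }
            bothTried.countDown();
            bothTried.await();       // hold own lock until both have tried
            return got;
        } finally {
            own.unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        ReentrantLock lock1 = new ReentrantLock();
        ReentrantLock lock2 = new ReentrantLock();
        CountDownLatch bothLocked = new CountDownLatch(2);
        CountDownLatch bothTried = new CountDownLatch(2);

        Thread pool1 = new Thread(() -> {
            try { trimmed[0] = crossTrim(lock1, lock2, bothLocked, bothTried); }
            catch (InterruptedException ignored) { }
        });
        Thread pool2 = new Thread(() -> {
            try { trimmed[1] = crossTrim(lock2, lock1, bothLocked, bothTried); }
            catch (InterruptedException ignored) { }
        });
        pool1.start(); pool2.start();
        pool1.join(); pool2.join();

        // With a plain lock() instead of tryLock() both threads would block
        // forever; here both trim attempts simply fail:
        System.out.println(trimmed[0] + " " + trimmed[1]); // prints false false
    }
}
```

A common way out would be to route all trim requests through a single coordinator (or acquire pool locks in a global order) rather than letting pools call into each other directly.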
Hello @rbygrave ,
Similar to what was described in Issue #97, we have a multi-tenant application running on 6 Tomcats. Each tenant has its own DB connection string, its own DB user, and password (for regulatory reasons), meaning each tenant has its own connection pool.
We often run out of connections: DB2 allows around 3,000 connections, after which it throws an exception, and the 6 Tomcats with 100 tenants reach that limit quite quickly (~5 connections per Tomcat per tenant).
We want to proactively prevent DB exceptions by having each application server check how many connections it is allowed to establish at most (in our case, 500 DB connections per Tomcat) and distribute this maximum across the individual connection pools.
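The simplest form of that distribution, an even split of the per-Tomcat budget across tenant pools, could be sketched like this (hypothetical helper, not part of this PR):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BudgetSplit {

    // Evenly split the Tomcat-wide budget across tenants,
    // handing out the remainder one connection at a time
    static Map<String, Integer> split(int budget, List<String> tenants) {
        Map<String, Integer> maxPerPool = new LinkedHashMap<>();
        int base = budget / tenants.size();
        int remainder = budget % tenants.size();
        for (String tenant : tenants) {
            maxPerPool.put(tenant, base + (remainder-- > 0 ? 1 : 0));
        }
        return maxPerPool;
    }

    public static void main(String[] args) {
        // 500 connections across 3 tenant pools
        System.out.println(split(500, List.of("a", "b", "c")));
        // prints {a=167, b=167, c=166}
    }
}
```

A static split like this wastes capacity when tenants have uneven load, which is where the trim/listener mechanism in this PR comes in: pools can borrow beyond their fair share and be trimmed back when the global budget runs out.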
In this PR, we have implemented a proposed solution:
We wrote the test against MariaDB because it allows a maximum of 150 DB connections.
Could you please give us feedback?
We could then also split the PR into smaller PRs (e.g., new trim method and listeners).
Cheers
Noemi