Corrupt JDBC connection because of integrated connection pooler (c3p0) #3539
Comments
@mattkaem any thoughts?
I believe that https://www.mchange.com/projects/c3p0/#configuring_connection_testing could provide some insights. I am not sure, though, whether Hono currently uses connection checking, but it looks like this could already be the solution to the problem.
I think we currently do not check the status of a connection before using it, but, looking at the c3p0 documentation, it seems there is a way to do so (https://www.mchange.com/projects/c3p0/#configuring_connection_testing).
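For reference, connection testing in c3p0 is enabled through a few pool properties. A minimal sketch of what this could look like, assuming a plain `ComboPooledDataSource` (the JDBC URL is a placeholder, not Hono's actual configuration):

```java
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolConfigSketch {
    public static void main(String[] args) throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setJdbcUrl("jdbc:postgresql://postgres:5432/hono"); // placeholder URL

        // Validate a connection each time it is checked out of the pool, so a
        // connection broken by a Postgres restart is discarded instead of reused.
        ds.setTestConnectionOnCheckout(true);

        // Additionally test idle connections in the background every 30 seconds.
        ds.setIdleConnectionTestPeriod(30);

        // Cheap validation query; without one, c3p0 falls back to a slower
        // metadata-based check.
        ds.setPreferredTestQuery("SELECT 1");
    }
}
```

Per the c3p0 docs, `testConnectionOnCheckout` is the most reliable option but adds a round trip on every checkout; `idleConnectionTestPeriod` is suggested there as a cheaper, asynchronous alternative.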
@nupis-DanielS would you like to give adding support for enabling connection checking a shot? @mattkaem and I can support if necessary.
Sure, I will give it a try ;-)
Switching to the Quarkus Agroal datasource can also make a difference (#3562) and harden things when connections are dropped from the pool.
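For context, Agroal in Quarkus exposes connection validation through configuration rather than code. A minimal sketch of the relevant `application.properties` entries, assuming the default datasource (URL and interval are illustrative, not Hono's shipped defaults):

```properties
# Placeholder connection settings
quarkus.datasource.db-kind=postgresql
quarkus.datasource.jdbc.url=jdbc:postgresql://postgres:5432/hono

# Re-validate idle connections in the background so that connections broken
# by a database restart are evicted before they are handed out again.
quarkus.datasource.jdbc.background-validation-interval=30S
quarkus.datasource.jdbc.validation-query-sql=SELECT 1
```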
@nupis-DanielS with #3562 having been merged, is this still an issue?
Hi, thank you for your development. Is there a possibility to get a pre-release image of version 2.5.0? We can only reproduce it in our cluster, which uses the Hono Helm chart.
We updated Hono to version 2.5.0 and it looks good. No bad connections so far, thanks :-)
Hi,
we use Eclipse Hono (JDBC device registry) in a Kubernetes environment with PostgreSQL. When the Postgres deployment restarts, the now-corrupt connection is still being used and is not closed by Hono. That leads to an error on the first transaction after the restart.
Is there a possibility to use a new connection for every transaction or to disable the integrated c3p0 connection pool?