chore: one (*sql.DB) pool for all jobsdb #5170
Conversation
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##           master    #5170      +/-   ##
==========================================
+ Coverage   72.91%   73.15%   +0.24%
==========================================
  Files         439      439
  Lines       51134    51223      +89
==========================================
+ Hits        37282    37473     +191
+ Misses      11390    11312      -78
+ Partials     2462     2438      -24

☔ View full report in Codecov by Sentry.
// Not deferring close here
// defer dbPool.Close()
What's the reason for not closing here? Shall we inject dbPool into embeddedAppHandler, so we always use one connection pool?
Just to keep the existing behaviour: we don't close the gateway db pool, but the others are closed right away. Since we're sharing the pool now, I didn't want to close it.
I left this comment to find out why the gateway pool wasn't being closed earlier, because it's closed in the gateway app handler anyway.
With the way things are, injecting a pool into apps would be a good idea. This would effectively enforce the same database for all components. But I think it'd be nice to have the option of passing different connection pools to different components.
What is the existing behaviour you are referring to? I see that every component was closing its own pool; why have we removed that?
We close all *sql.DB except the gateway db. The gateway *jobsdb.Handle is only Stopped.
In any case, we're closing it everywhere now; looks like there was no reason after all.
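To make the ownership rule discussed in this thread concrete, here is a minimal Go sketch (the names and DSN are made up; this is not rudder-server code): the app that creates the shared pool is the only place that closes it, while components that borrow it must leave it open.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // driver choice is an assumption
)

func main() {
	// The app creates the shared pool once...
	dbPool, err := sql.Open("postgres", "dbname=jobsdb sslmode=disable") // DSN is illustrative
	if err != nil {
		log.Fatal(err)
	}
	// ...and is its only owner, so only the app defers the Close.
	defer dbPool.Close()

	// Components borrow the same pool and never close it themselves;
	// previously each of them opened (and closed) a pool of its own.
	gatewayDB := dbPool
	routerDB := dbPool
	_, _ = gatewayDB, routerDB
}
```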
Co-authored-by: Leonidas Vrachnis <leo.al.vra@gmail.com>
Force-pushed from 4d6cbab to 12df2fc
@Sidddddarth can you add more details in the description on why we are going with a single db connection pool?
Can we introduce a feature flag to control whether we want a shared connection pool or not? Once this PR gets merged we will have no way to switch back to the previous behaviour other than reverting. I would suggest passing a nil db connection as a signal to components that they need to create their own connections.
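A minimal sketch of the nil-as-signal fallback being suggested (NewHandle, ownsPool, and the DSN handling are hypothetical, not the actual rudder-server API): a nil *sql.DB tells the component to open, and later close, its own pool, preserving the pre-PR behaviour.

```go
package jobsdbsketch

import (
	"database/sql"

	_ "github.com/lib/pq" // driver choice is an assumption
)

// Handle stands in for jobsdb.Handle; ownsPool is a hypothetical field.
type Handle struct {
	dbHandle *sql.DB
	ownsPool bool
}

// NewHandle treats a nil db as a signal to create its own connections.
func NewHandle(shared *sql.DB, dsn string) (*Handle, error) {
	if shared != nil {
		// Shared pool injected by the app; the app stays responsible for closing it.
		return &Handle{dbHandle: shared}, nil
	}
	// nil signals: open an owned pool, as before this PR.
	own, err := sql.Open("postgres", dsn)
	if err != nil {
		return nil, err
	}
	return &Handle{dbHandle: own, ownsPool: true}, nil
}

// Close closes the pool only when this handle created it.
func (h *Handle) Close() error {
	if h.ownsPool {
		return h.dbHandle.Close()
	}
	return nil
}
```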
utils/misc/dbutils.go
Outdated
updatePoolConfig(db.SetMaxOpenConns, &maxConns, maxConnsVar)
updatePoolConfig(db.SetConnMaxIdleTime, &maxIdleTime, maxIdleTimeVar)
updatePoolConfig(db.SetMaxIdleConns, &maxIdleConns, maxIdleConnsVar)
updatePoolConfig(db.SetConnMaxLifetime, &maxConnLifetime, maxConnLifetimeVar)
Why not move this below <-ticker.C?
No reason, I'm just used to writing it like this.
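For context, a sketch of the loop shape this exchange is about (the interval, function names, and the ValueLoader stand-in are assumptions, not the actual dbutils.go code): the settings are applied at the top of the loop, so they take effect once immediately and again after every tick; moving the calls below <-ticker.C would only delay the first application. A setter is invoked only when its hot-reloadable value actually changed.

```go
package poolconfig

import (
	"context"
	"database/sql"
	"time"
)

// ValueLoader mimics the config.ValueLoader[T] used below: Load returns the
// current, possibly hot-reloaded, value.
type ValueLoader[T any] interface{ Load() T }

// updatePoolConfig calls setter only when the loaded value differs from the
// last applied one, mirroring the helper discussed in this review.
func updatePoolConfig[T comparable](setter func(T), current *T, conf ValueLoader[T]) {
	if v := conf.Load(); v != *current {
		*current = v
		setter(v)
	}
}

func monitorPoolConfig(ctx context.Context, db *sql.DB,
	maxConnsVar ValueLoader[int], maxIdleTimeVar ValueLoader[time.Duration],
) {
	var (
		maxConns    int
		maxIdleTime time.Duration
	)
	ticker := time.NewTicker(5 * time.Second) // interval is an assumption
	defer ticker.Stop()
	for {
		// Applying the settings before waiting means the first pass runs at
		// startup and every later pass reuses the same code path.
		updatePoolConfig(db.SetMaxOpenConns, &maxConns, maxConnsVar)
		updatePoolConfig(db.SetConnMaxIdleTime, &maxIdleTime, maxIdleTimeVar)
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
		}
	}
}
```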
utils/misc/dbutils.go
Outdated
return db, nil
}

func updatePoolConfig[T comparable](f func(T), current *T, conf config.ValueLoader[T]) {
- func updatePoolConfig[T comparable](f func(T), current *T, conf config.ValueLoader[T]) {
+ func updatePoolConfig[T comparable](setter func(T), current *T, conf config.ValueLoader[T]) {
[Optional]
LGTM 👍
We need to modify rudder-server/jobsdb/jobsdb.go, line 1094 in 209a9c1:

func (jd *Handle) Close() {

so that jobsdb closes the connection only if the connection was initialised by jobsdb itself.
@@ -769,7 +770,9 @@ func (jd *Handle) init() {
	jd.loadConfig()

	// Initialize dbHandle if not already set
	if jd.dbHandle == nil {
	if jd.dbHandle != nil {
		jd.sharedConnectionPool = true
We can also do:

- jd.sharedConnectionPool = true
+ jd.onCloseFn = func() {
+ 	if err := jd.dbHandle.Close(); err != nil {
+ 		jd.logger.Errorw("error closing db connection", "error", err)
+ 	}
+ }
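A hedged sketch of how the two ideas in this thread could compose (stand-in types, illustrative DSN; not the actual jobsdb code): init decides ownership once, registering an onCloseFn only when the handle opened its own pool, so Close can simply delegate and a shared pool is never closed by a borrower.

```go
package jobsdbsketch

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // driver choice is an assumption
)

// Handle mirrors the fields from the diff above; everything else is assumed.
type Handle struct {
	dbHandle             *sql.DB
	sharedConnectionPool bool
	onCloseFn            func()
}

func (jd *Handle) init() error {
	if jd.dbHandle != nil {
		// Injected pool: mark it shared and leave closing to its owner.
		jd.sharedConnectionPool = true
		return nil
	}
	// No injected pool: open our own and register the real close.
	db, err := sql.Open("postgres", "dbname=jobsdb sslmode=disable") // DSN is illustrative
	if err != nil {
		return err
	}
	jd.dbHandle = db
	jd.onCloseFn = func() {
		if err := jd.dbHandle.Close(); err != nil {
			log.Printf("error closing db connection: %v", err)
		}
	}
	return nil
}

// Close releases the pool only when this handle registered a close function,
// i.e. only when it owns the pool.
func (jd *Handle) Close() {
	if jd.onCloseFn != nil {
		jd.onCloseFn()
	}
}
```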
Description
One connection pool for all jobsdb.
In the future, the same pool can be used across other areas (reporting, backend-config cache, drain-config, etc.), with an appropriate increase in the pool's configured limits.
Motivation
There are several problems to deal with:
- By default, each jobsdb connection pool could hold only two idle connections, which meant frequently opening new connections.
- The overall in_use connections at any point in time hovered around just 20-25 (including reporting, config cache, etc.), so there was rarely a need for more connections to the database.
- Having too many idle connections lying around caused unnecessary memory usage on the postgres pods.

Better control over the number of idle and total connections would help with all of these. Considering these factors, it made sense to have a common connection pool and let different components borrow from it.
To allow for a surge in connection demand, the max open connections config sits at a generous distance above the observed usage, and it is of course configurable in case the default of 40 isn't enough.
Configuration
Using the given config (all of these are hot-reloadable; a sketch applying them follows the list):
- maxOpenConnections: 40
- maxIdleConnections: 5
- maxIdleConnIdleTime: 15 minutes
- maxConnLifetime: 0
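A minimal sketch (assumed wiring, not the actual dbutils.go code) of these defaults applied to a database/sql pool; in rudder-server they are hot-reloadable, here they are applied once for illustration:

```go
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/lib/pq" // driver choice is an assumption
)

func main() {
	db, err := sql.Open("postgres", "dbname=jobsdb sslmode=disable") // DSN is illustrative
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Defaults from the list above.
	db.SetMaxOpenConns(40)                  // maxOpenConnections
	db.SetMaxIdleConns(5)                   // maxIdleConnections
	db.SetConnMaxIdleTime(15 * time.Minute) // maxIdleConnIdleTime
	db.SetConnMaxLifetime(0)                // maxConnLifetime: 0 means no limit
}
```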
Linear Ticket
Resolves PIPE-1602
Security