Let's figure out logging #150
I'm going to pull this off the 1.0 milestone. I don't see any way for this to happen in the immediate future. The ecosystem isn't there yet. From my point of view there are two main requirements here:
The two options today seem to be … I'll be interested to see what happens with rwf2/Rocket#21, as I assume they have similar concerns to mine.
Also, if …
@sgrif would it be possible to leave logging up to the user by providing hooks on the connection? I'm thinking something like this:

```rust
trait Connection {
    ...
    /// Register a function that will be called before each executed statement.
    fn pre_execute<F>(&mut self, hook: F)
    where F: Fn(query: &str, args: &[?]) -> bool;

    fn on_success<F>(&mut self, hook: F) where ..;
    fn on_error<F>(&mut self, hook: F) where ..;
}
```

This would allow customized logging, logging only errors, profiling by measuring execution time, and so on.
Profiling execution time is definitely an interesting case that I haven't considered much; I need to give that one some thought. (The general answer to your question is: yes, having us provide our own ad-hoc logging system with shims for log and slog is probably the most likely path forward at this point.)
I've played a bit with making a custom `LogConnection`:

```rust
#[derive(Debug, Clone, Copy)]
pub enum TransactionState {
    Start,
    Commit,
    Abort,
}

pub trait Logger<DB: Backend>: Send + Default {
    fn on_establish(&self, url: &str);
    fn on_transaction(&self, state: TransactionState);
    fn on_execute(&self, query: &str);
    fn on_query<T>(&self, source: &T)
    where
        T: QueryFragment<DB>,
        DB::QueryBuilder: Default;
}

#[derive(Debug)]
pub struct LogConnection<C, L> {
    inner: C,
    logger: L,
}

impl<C, L> SimpleConnection for LogConnection<C, L>
where
    C: Connection,
    L: Logger<C::Backend>,
{
    fn batch_execute(&self, query: &str) -> QueryResult<()> {
        self.logger.on_execute(query);
        self.inner.batch_execute(query)
    }
}

impl<C, L> Connection for LogConnection<C, L>
where
    C: Connection,
    L: Logger<C::Backend>,
    <C::Backend as Backend>::QueryBuilder: Default,
{
    type Backend = C::Backend;
    type TransactionManager = LogTransactionManager<C::TransactionManager>;

    fn establish(database_url: &str) -> ConnectionResult<Self> {
        let logger = L::default();
        logger.on_establish(database_url);
        C::establish(database_url).map(|inner| LogConnection { inner, logger })
    }

    fn execute(&self, query: &str) -> QueryResult<usize> {
        self.logger.on_execute(query);
        self.inner.execute(query)
    }

    fn query_by_index<T, U>(&self, source: T) -> QueryResult<Vec<U>>
    where
        T: AsQuery + Clone,
        T::Query: QueryFragment<Self::Backend> + QueryId,
        Self::Backend: HasSqlType<T::SqlType>,
        U: Queryable<T::SqlType, Self::Backend>,
    {
        self.logger.on_query(&source.clone().as_query());
        self.inner.query_by_index(source)
    }

    fn query_by_name<T, U>(&self, source: &T) -> QueryResult<Vec<U>>
    where
        T: QueryFragment<Self::Backend> + QueryId,
        U: QueryableByName<Self::Backend>,
    {
        self.logger.on_query(source);
        self.inner.query_by_name(source)
    }

    fn execute_returning_count<T>(&self, source: &T) -> QueryResult<usize>
    where
        T: QueryFragment<Self::Backend> + QueryId,
    {
        self.logger.on_query(source);
        self.inner.execute_returning_count(source)
    }

    fn transaction_manager(&self) -> &Self::TransactionManager {
        // Reinterpret the inner manager as the newtype wrapper; this relies on
        // `LogTransactionManager<T>` having the same layout as `T`
        // (see the implementation of `std::path::Path::new` for the same trick).
        unsafe {
            &*(self.inner.transaction_manager() as *const C::TransactionManager as
                *const LogTransactionManager<C::TransactionManager>)
        }
    }
}

#[derive(Debug)]
pub struct LogTransactionManager<T> {
    inner: T,
}

impl<C, L> TransactionManager<LogConnection<C, L>> for LogTransactionManager<C::TransactionManager>
where
    C: Connection,
    L: Logger<C::Backend>,
    <C::Backend as Backend>::QueryBuilder: Default,
{
    fn begin_transaction(&self, conn: &LogConnection<C, L>) -> QueryResult<()> {
        conn.logger.on_transaction(TransactionState::Start);
        self.inner.begin_transaction(&conn.inner)
    }

    fn rollback_transaction(&self, conn: &LogConnection<C, L>) -> QueryResult<()> {
        conn.logger.on_transaction(TransactionState::Abort);
        self.inner.rollback_transaction(&conn.inner)
    }

    fn commit_transaction(&self, conn: &LogConnection<C, L>) -> QueryResult<()> {
        conn.logger.on_transaction(TransactionState::Commit);
        self.inner.commit_transaction(&conn.inner)
    }

    fn get_transaction_depth(&self) -> u32 {
        self.inner.get_transaction_depth()
    }
}
```

This could nearly be implemented outside of diesel.
The main problem with that implementation is that I made the mistake of encouraging code written as …
I also doubt this could ever be (efficiently) implemented outside of Diesel, since any outside implementation would have to force the query builder to run when we would normally skip it because it's in the prepared statement cache and looked up by …
That's a bit unfortunate, but is it really required that logging works out of the box for existing implementations? In the interface that I've proposed above there is a …
@sgrif enabling …

I.e. the Postgres logs do not actually show you the values, so it's not a good replacement for hunting down bugs.
@diesel-rs/core I would like to propose that we discuss this issue again in the context of diesel 2.0.
Couldn't this be simply "solved" by doing …
@weiznich If your crate doesn't ever pass a connection to a dependency... maybe? I don't think it's a viable option at this point, even beyond the "people take …
As @theduke mentioned, it would be nice if diesel provided callbacks (like Active Record) for other crates to hook into:

```rust
trait Connection {
    fn before_commit(&mut self, hook: impl Fn(...));
    fn after_commit(&mut self, hook: impl Fn(...));
}
```

Then you could have diesel "plugins" for logging, profiling, validations, etc. Is this something that you would consider?
@ibraheemdev You just left off the interesting part of the function signature. What exactly would those hook functions look like, so that we don't have a separate one for each function in the …
Callbacks don't generally work great with Rust, IMO. Typically in cases like this I would allow the user to register a handler object implementing a certain trait:

```rust
trait Logger {
    fn before_event(&mut self, data: &LoggingEvent);
    fn after_event(&mut self, data: &LoggingEvent);
}
```

and then this object would get called with:

```rust
#[non_exhaustive]
enum LoggingEvent {
    Connection { ... }
}
```

or something like that. Just my 2c.
@ibraheemdev @dpc That seems like a possible solution. If someone is actually interested in implementing this, please open a new topic on the discussion forum so that we can figure out the details of the API there. I see a few open questions that need to be addressed, like: …
I started the discussion over here: #3076
Fixed by #3864
As I'm adding SQLite support, I'm finding it annoying to track down cases like:

```
called `Result::unwrap()` on an `Err` value: DatabaseError("SQL logic error or missing database")
```

This should definitely be injectable, and I'd prefer that it be decoupled from the individual connection classes, but I'm not sure that's actually possible.

I'd originally imagined something like `LoggingConnection` being a struct which wrapped another `T: Connection` and ran everything through the `DebugQueryBuilder` (and eventually cached it through the same mechanisms that we add for prepared statement caching). This should also log the values for the bind params. I am fine with adding the constraint `ToSql<ST, DB>: Debug` to accommodate this, as basically everything should implement it.

However, `DebugQueryBuilder` can have output which potentially differs from the actual SQL executed (for example, we're currently figuring out how to deal with SQLite's lack of a `DEFAULT` keyword, and I don't see how whatever we end up doing there can actually be reflected in `DebugQueryBuilder`). So perhaps the correct answer is to have a `set_logger` method added to `Connection`.

Ultimately I'm undecided on this, and am open to either solution if presented as a working PR. Discussion about how to go about this, or use cases that we think we need to support, is certainly welcomed.