Consider improvements for passing values to queries #463
Or, a new trait. I also definitely agree that we should use prepared statement metadata if available - additional validation is one of the reasons for its existence in the protocol. We should probably make the option opt-in or opt-out, to allow performance-sensitive workloads to skip the additional runtime checks.
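For illustration, the opt-in/opt-out idea mentioned above could look roughly like this. All names here (`ExecutionConfig`, `validate_bind_types`, `validate`) are hypothetical stand-ins, not the driver's API; the point is only that validation is gated behind a flag:

```rust
// Illustrative only: a config switch gating the extra runtime type checks,
// so performance-sensitive workloads can opt out of them.
pub struct ExecutionConfig {
    pub validate_bind_types: bool,
}

// Simplified type comparison; a real implementation would compare the
// driver's CQL type representations, not strings.
fn type_name_matches(expected: &str, provided: &str) -> bool {
    expected == provided
}

// Returns Err only when validation is enabled and a mismatch is found.
pub fn validate(
    cfg: &ExecutionConfig,
    expected: &[&str],
    provided: &[&str],
) -> Result<(), String> {
    if !cfg.validate_bind_types {
        return Ok(());
    }
    if expected.len() != provided.len() {
        return Err(format!(
            "expected {} values, got {}",
            expected.len(),
            provided.len()
        ));
    }
    for (i, (e, p)) in expected.iter().zip(provided.iter()).enumerate() {
        if !type_name_matches(e, p) {
            return Err(format!("bind marker {i}: expected {e}, got {p}"));
        }
    }
    Ok(())
}
```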
I sat and pondered on this task for a little while, and tried to specify it a bit more and split it into subtasks. I see it like this:

Getting rid of the

For unprepared statements, we don't have a way to determine the column names and types of the bind markers. Obtaining this info requires parsing the statement and knowing the current schema - which we could theoretically do in the driver, but it's a huge effort and I'm not sure it can be done without introducing races. In the case of prepared statements, this information is available because it is computed by the DB and returned in the response to the prepare request. I see two ways out:
Add new serialization traits

Let's use the convention introduced in the not-yet-done deserialization refactor and let's name the new traits:

```rust
pub trait SerializeRow {
    fn serialize(&self, ctx: &RowSerializationContext<'_>, out: &mut impl std::io::Write) -> Result<(), std::io::Error>;
}

pub trait SerializeCql {
    fn serialize(&self, typ: &CqlType, out: &mut impl std::io::Write) -> Result<(), std::io::Error>;
}
```

The

As this change will affect all

Current serialization API is heavily based on
It will be necessary to update some examples and documentation, but I don't expect it to be much work compared to the deserialization refactor.

Macros for the new traits

The new

@cvybhu @havaker @wprzytula @Lorak-mmk thoughts? If there are no objections I will create sub-tasks. If we go with way 2. in "Getting rid of the
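As a sketch of how the proposed traits would compose, here is a minimal, self-contained example implementing a `SerializeCql`-shaped trait for `i64`. The `CqlType` enum is a hypothetical stand-in (the real driver would supply its own CQL type representation); only the trait signature comes from the proposal above:

```rust
use std::io::Write;

// Hypothetical stand-in for the driver's CQL type metadata; only two
// variants are modeled, enough to show the type-aware dispatch.
#[derive(PartialEq)]
pub enum CqlType {
    BigInt,
    Int,
}

// The trait shape proposed in the comment above.
pub trait SerializeCql {
    fn serialize(&self, typ: &CqlType, out: &mut impl Write) -> Result<(), std::io::Error>;
}

// Example impl: i64 maps to CQL bigint (8 big-endian bytes) and refuses
// serialization into a mismatched column type instead of corrupting data.
impl SerializeCql for i64 {
    fn serialize(&self, typ: &CqlType, out: &mut impl Write) -> Result<(), std::io::Error> {
        if *typ != CqlType::BigInt {
            return Err(std::io::Error::new(
                std::io::ErrorKind::InvalidInput,
                "type mismatch: expected bigint",
            ));
        }
        out.write_all(&self.to_be_bytes())
    }
}
```

Because the target type is threaded through `serialize`, a mismatched bind value fails at serialization time rather than surfacing as a server-side error.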
This is a great idea, provided we settle on implementing the 1st solution. I am also for the 2nd, though.
You most probably meant

My thoughts are mostly positive. Let's do it!
One possible problem with the 2nd solution: it may encourage people to perform textual parameter substitution, as they can't use query parameters - leading to security vulnerabilities (NoSQL injection).
Hmm, right. Maybe the 1st option is better after all. Another quite convincing argument in its favor that I can see right now is that it will just break less of the existing code.
I've created the sub-tasks. If there are some subtask-specific issues to discuss, then let's do it on the subtasks.
@piodul Can we close this now, or is there more to implement?
To be fair, I think we reduced the scope of the issue a little while we were working on the 0.11 release. We did all that was necessary to ensure type safety, but the issue actually starts by describing a different problem, and improving type safety was a side effect: namely, it suggests making it possible to do things like passing integers of one size into a database column of another size and letting the driver handle down/upcasting in a safe way. We already have #409, so I think we can reuse that issue for discussing whether we should implement support for such casts at all (personally I'm not a big fan of the idea). I'll close this issue.
There have been some ideas and discussions about improving the way values are passed to queries.
The purpose of this issue is to gather all of them in one place and decide what to do.
Unsigned values

As described in #409, our driver doesn't support passing unsigned integer values like `u64` in a query. This could be solved by implementing the `Value` trait for them, but we aren't sure if that's the best way to go about it. When given a `u64` value the driver should probably cast it to `i64` and send it to the database, but it isn't exactly clear what should happen if the `u64` value doesn't fit in `i64`. Returning an error could surprise a user who passed a `u64` thinking nothing can go wrong. Serializing it as `BigDecimal` (as one of the users on Slack suggested) is another option, but that's way less intuitive. Another problem is that this makes our API even less strongly typed - someone might pass a `u64` by mistake and think that the compiler wouldn't allow anything like that.

New interface for `Value`?

It might be a good idea to extend the interface of the `Value` trait. Currently it looks like this:
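A hedged sketch of what the `Value` trait looked like at the time (the `ValueTooBig` error name and exact signature are assumptions, not verbatim driver source): serialization gets only a byte buffer and no information about the target column.

```rust
// Rough reconstruction of the pre-refactor trait; note that `serialize`
// receives no column type information at all.
#[derive(Debug)]
pub struct ValueTooBig;

pub trait Value {
    fn serialize(&self, buf: &mut Vec<u8>) -> Result<(), ValueTooBig>;
}

// Example impl: a CQL bigint is written as 8 big-endian bytes,
// unconditionally, because there is no type to check against.
impl Value for i64 {
    fn serialize(&self, buf: &mut Vec<u8>) -> Result<(), ValueTooBig> {
        buf.extend_from_slice(&self.to_be_bytes());
        Ok(())
    }
}
```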
We could add a field to specify the target column type. This way `serialize` would know what kind of column it's serializing for. Given a big `u64` value it could decide to output either `i64` or `BigDecimal` depending on the target column type. Additionally this would help avoid mistakes with passing incorrectly sized values, e.g. passing `i32` instead of `i64`.
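A minimal sketch of that idea, assuming a simplified `CqlType` and using a `Varint` arm as a stand-in for the arbitrary-precision path the issue calls `BigDecimal` (the actual wire encodings are elided; only the dispatch on column type matters here):

```rust
// Simplified stand-in for the target column type.
#[derive(Clone, Copy)]
pub enum CqlType {
    BigInt, // 8-byte signed integer
    Varint, // arbitrary-precision integer (stand-in for the BigDecimal idea)
}

#[derive(Debug)]
pub struct ValueTooBig;

// Dispatch on the target column type: downcast u64 -> i64 only when the
// value fits, and report an error instead of silently wrapping.
pub fn serialize_u64(v: u64, typ: CqlType, out: &mut Vec<u8>) -> Result<(), ValueTooBig> {
    match typ {
        CqlType::BigInt => match i64::try_from(v) {
            Ok(i) => {
                out.extend_from_slice(&i.to_be_bytes());
                Ok(())
            }
            // The value doesn't fit the column: refuse rather than wrap.
            Err(_) => Err(ValueTooBig),
        },
        // A varint column can hold any u64. A real CQL varint encoding is
        // more involved; this arm only illustrates the type-driven branch.
        CqlType::Varint => {
            out.extend_from_slice(&v.to_be_bytes());
            Ok(())
        }
    }
}
```

With the column type available, the `i32`-vs-`i64` mistake from the paragraph above also becomes detectable at serialization time rather than on the server.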
Should user pass column types along with values?

To make the new interface possible we would have to get the type of columns from somewhere.
One way would be to just force the user to specify types along with the values. That would make the API even more strongly typed, but harder and more verbose to use.
Another way would be to use the metadata returned from the database after preparing a statement. `PreparedMetadata` seems to contain the information we need. For unprepared queries we could just prepare them under the hood; they aren't performance sensitive anyway. AFAIR the Python driver does some local type checking - maybe they have figured out how to do this already and we could replicate their solution.
Tasks

- `Session::execute` #800