[SQLite] Discussion: how to handle duplicate column names in result set? #696
Conversation
I also tried this test case, but didn't get far: #691
Ah, apologies @rozenmd, I missed that you'd already pushed something to demonstrate the issue. Another option that just occurred to me is making But then it's probably no good if D1 introduces its own column-collision-renaming behaviour. It'd be better to do it in workerd so it's standard across all users of the API.
IMO, it's fine to say to the app: "If you're going to request rows as objects, it's your responsibility to make sure the column names don't collide, by using `AS`." But this doesn't work for D1 specifically because D1 doesn't know at the time of the query whether the application is requesting objects vs. raw arrays, right? So I think a `columnNames` property is probably the way to go.
I actually think maybe the DO API should throw an exception if rows are requested in object form and it turns out there are duplicate column names. If the app doesn't realize it has duplicate column names, renaming one of them isn't going to make the code work correctly. If the app does know, then it can use `AS`. So adding a `_1` suffix or whatever doesn't seem helpful.
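That throw-on-duplicates policy can be sketched like so. This is a Python `sqlite3` stand-in for illustration only, not the workerd API; the error string and the `rows_as_objects` helper are hypothetical:

```python
import sqlite3

def rows_as_objects(cursor):
    """Return rows as dicts, but refuse to silently drop data when
    the result set contains duplicate column names."""
    names = [d[0] for d in cursor.description]
    if len(names) != len(set(names)):
        dupes = sorted({n for n in names if names.count(n) > 1})
        raise ValueError(
            "DUPLICATE_COLUMN_NAMES: %s (use AS to alias)" % ", ".join(dupes))
    return [dict(zip(names, row)) for row in cursor.fetchall()]

db = sqlite3.connect(":memory:")
db.executescript(
    "CREATE TABLE a (c INTEGER); CREATE TABLE b (c INTEGER);"
    "INSERT INTO a VALUES (3); INSERT INTO b VALUES (7);")

# a.c and b.c both surface as plain "c": throw rather than rename
try:
    rows_as_objects(db.execute("SELECT * FROM a, b"))
except ValueError as e:
    print(e)  # DUPLICATE_COLUMN_NAMES: c (use AS to alias)

# The app resolves the collision itself with AS
print(rows_as_objects(db.execute("SELECT a.c AS a_c, b.c AS b_c FROM a, b")))
```

Note that raw-array requests never hit the check, matching the point above that only the object form is ambiguous.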
Agree with the idea of a columnNames property to minimize redundant bytes.
Adds up fast.
Re: duplicates - a contextual error would be great. The DO API can return something like `DUPLICATE_COLUMN_NAMES` as a short-form error.
The D1 API can expand that into (for example):
“Error: DUPLICATE_COLUMN_NAMES: Duplicate column names in result set. This
typically occurs when joining multiple tables with overlapping column
names. Use `AS` to provide a unique alias for column names.”
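To picture why a `columnNames` property minimizes redundant bytes: send the names once and each row as a bare array, instead of repeating every key in every row. The JSON shape below is a hypothetical sketch, not D1's actual wire format:

```python
import json

names = ["user_id", "name"]
rows = [(1, "alice"), (2, "bob"), (3, "carol")]

# Object-per-row: the keys are serialized again for every single row
objects = json.dumps([dict(zip(names, r)) for r in rows])

# columnNames sent once, rows as bare arrays: keys appear a single time
compact = json.dumps({"columnNames": names, "rows": [list(r) for r in rows]})

# Even at three rows the compact form is smaller; the gap widens per row
print(len(objects), len(compact))
```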
FYI, this will be fixed in D1 once #1586 is merged and rolls out.
This is now live.
If you have the following SQL:

You get (in real sqlite in 'table' output mode):

(Note the repeated column name `c`.)

Doing that in workerd's SQLite:

Note that the second value of `c`, 7, has overwritten the first value, 3.

Using `.raw()` gets the right data:

But currently, the D1 shim assumes `.raw()` can always be retrieved from the `Object.values` of the "object" response type. We'll be changing the shim but, for backwards compatibility, should we endeavour to make the default response format never drop data, by preferring to instead mangle the column names?

Note the `c_1` key, meaning "first duplicate of `c`". We could also use `.` as a separator, or even `~` if we wanted to channel DOS filenames... :)

Alternatively, we could throw an exception if this case occurs, though that might be painful for people who do something like `SELECT * FROM users, projects WHERE projects.user_id = users.user_id`. There's a duplicate column name but the data is always the same, so is that really bad? The fact that the `.raw()` and normal responses have different lengths is still not great, though...

As an aside, it's a shame the `full_column_names` pragma is deprecated (and has no effect). It would have solved this nicely for anyone who relied on `SELECT *` a bunch in their app...
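The shape of the problem can be reproduced with Python's `sqlite3` (the column name `c` and the values 3 and 7 come from the description above; the table names and the `mangle` helper are hypothetical):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE t1 (a INTEGER, b INTEGER, c INTEGER);
    CREATE TABLE t2 (c INTEGER, d INTEGER);
    INSERT INTO t1 VALUES (1, 2, 3);
    INSERT INTO t2 VALUES (7, 8);
""")

cur = db.execute("SELECT * FROM t1, t2")
names = [d[0] for d in cur.description]
row = cur.fetchone()

print(names)  # ['a', 'b', 'c', 'c', 'd'] -- repeated column name c
print(row)    # (1, 2, 3, 7, 8)           -- raw form keeps every value
print(dict(zip(names, row)))  # {'a': 1, 'b': 2, 'c': 7, 'd': 8} -- 7 overwrote 3

# The mangling option: suffix later duplicates so no value is dropped
def mangle(names):
    seen, out = {}, []
    for n in names:
        if n in seen:
            seen[n] += 1
            out.append("%s_%d" % (n, seen[n]))
        else:
            seen[n] = 0
            out.append(n)
    return out

print(dict(zip(mangle(names), row)))
# {'a': 1, 'b': 2, 'c': 3, 'c_1': 7, 'd': 8}
```

The last line shows the `c_1` scheme discussed above: both values survive, at the cost of keys that don't match the SQL.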