What if non-ResultSet SELECT execute() returned an error when maxRows was too small? #146
Comments
1 would be my first choice, although 2 would be nice if not too much work. Now that you have result sets (nice!), the proper change would be to use them. Regards,
We'll be doing most (if not all) of our reading with result sets, so this won't impact us. Reading without result sets makes sense when you have very small tables or when you do paging. In both cases it seems important to guarantee that what you get is what you asked for (if you ask for a page of 200 you should get a page of 200, not 100 because of a global default). If this is not the case, it should not happen silently. So I would go for option 4. Don't make the API more complex, just make it safe with fail-fast.
We often have the case that we show a user just the first 10..80 rows of a larger result.
@rinie If you only want the first 80 rows you should limit your query with a row-limiting clause in the SQL itself.
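As an illustration of limiting in SQL rather than relying on the driver's maxRows, here is a minimal sketch; the `limitQuery` helper and its name are purely illustrative (not part of node-oracledb), and it assumes Oracle's ROWNUM pseudocolumn:

```javascript
// Hypothetical helper: cap a query at n rows using Oracle's ROWNUM
// pseudocolumn, so a global maxRows default never silently truncates.
function limitQuery(sql, n) {
  return 'SELECT * FROM (' + sql + ') WHERE ROWNUM <= ' + n;
}

// Wraps the statement so at most 80 rows can come back.
console.log(limitQuery('SELECT * FROM employees ORDER BY id', 80));
```

Because the limit lives in the statement, the row count the application asked for and the row count the database enforces are the same by construction.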
Maybe proposal 3 is a good compromise.
@bruno. In part of our system the user can enter a query, and I do not want to modify it. Coding the number of rows into a query is not my idea of appropriate SQL. Then again, from now on all our queries will use result sets.
I like option 2 for convenience, but I strongly suggest everyone use Result Sets.
@rinie Option 3 could be made backwards compatible. For us it is not a problem to limit with SQL because our SQL statements are generated from metadata, not hardcoded. |
3
Option 2 is my preference. |
Thanks for the comments so far. Keep 'em coming. (And let us know how you like Result Sets & prefetching.)
While Result Sets are the way to go, I'd support option 2 for those unusual cases. I also don't think it makes the driver overly complex... and I'd rather handle this situation through callbacks than error handling. Nice work, guys! Result Sets and prefetching are cool features, especially for pagination.
If there is no Result Set, option 2 seems to give the best output you could ask for. Naturally the best choice would be a behavior option parameter on the execute() call and/or on getConnection() (with a default of option 2).
Option 2 would be like:

```javascript
{ rows: [ . . . ],
  somenewflag: true,      // or false
  resultSet: undefined,
  outBinds: undefined,
  rowsAffected: undefined,
  metaData: [ . . . ] }
```

If the flag indicates that the fetch was incomplete, the app would have to re-execute with a bigger maxRows and get all the data again; it wouldn't be possible just to get the next set of results. That's what Result Sets are for. Now, what should the flag be called? rowsTruncated?
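To sketch how an application might consume that shape, assuming the proposed (and at this point hypothetical) rowsTruncated flag on the execute() result:

```javascript
// Sketch only: rowsTruncated is the flag proposed above, not (yet) a real
// node-oracledb property. The helper rejects silently truncated results.
function handleResult(result) {
  if (result.rowsTruncated) {
    throw new Error('Fetch incomplete: raise maxRows or use a Result Set');
  }
  return result.rows;
}

// Simulated execute() result object:
var rows = handleResult({ rows: ['a', 'b'], rowsTruncated: false });
console.log(rows.length);
```

With a flag like this, the "missing rows" condition becomes detectable in one place instead of being invisible.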
@cjbj I like option 2; I definitely think an attribute that lets us know our data has been truncated should be there. The name rowsTruncated would work fine. I also think option 3 should be added as well. It's already a pain to have to check for errors after every I/O operation in Node.js; I wouldn't want to check for errors and whether rowsTruncated was true on every execute. I'd like to be able to set an option on the base driver class (for defaults) and on executes (for overrides), maybe called errorWhenRowsTruncated (terrible name, I know), that turns this condition into an error for me.
+1 @cjbj
Hi Chris! We vote for option 2. All my best,
Looks like option 2 is the favourite. @bjouhier has some good arguments for option 4, though. |
@cjbj There seems to be a consensus around option 2. People (like me) who want option 4 can implement it with a light wrapper around the option 2 API. |
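Such a light wrapper might look like this; it is a sketch that assumes the option 2 API ships with a rowsTruncated flag, and the safeExecute name is made up:

```javascript
// Hypothetical fail-fast wrapper: turns option 2's truncation flag into an
// option 4 style error, without complicating the driver itself.
function safeExecute(connection, sql, binds, options, callback) {
  connection.execute(sql, binds, options, function (err, result) {
    if (err) { return callback(err); }
    if (result.rowsTruncated) {
      return callback(new Error('Result truncated at maxRows=' + options.maxRows));
    }
    callback(null, result);
  });
}
```

Callers keep the normal execute() callback signature but can rely on never receiving a silently truncated rows array.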
I am interested, but disagree. In my opinion Node.js (and TCP/IP, file I/O) [...] Your example feels like doing file I/O through mmap. Rinie
Disagree too. See my reply in the other issue. |
@bchr02 I think you have a point here... I mean, all other available Oracle drivers work with this option... why not this one?
We should follow this up in Issue #158 |
Will you be impacted if there is a change in SELECT execute() behavior
for non-Result Sets when the number of query rows is larger than maxRows
https://github.com/oracle/node-oracledb/blob/master/doc/api.md#propdbmaxrows ?
@dmcghan had an example of a 98 row table where a (non-Result Set) query
with maxRows of 100 currently returns all rows. If the table grows to
102 rows, the query will return only 100 rows and the application will
continue happily. The application won't know there are 'missing' rows.
Now that node-oracledb has Result Sets it is easier to use them for queries
where the number of rows isn't known in advance, so you don't need to
use huge, speculative maxRows values. And now that we have pre-fetching,
detecting an incomplete fetch can be done efficiently.
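The Result Set pattern referred to here can be sketched as follows. The fetchAll helper and the mock object are illustrative; getRows(numRows, callback) and close(callback) mirror the real Result Set methods, and in real code the object would come from connection.execute(sql, binds, { resultSet: true }, cb):

```javascript
// Drain a Result Set in fixed-size batches instead of guessing maxRows.
function fetchAll(resultSet, batchSize, callback, acc) {
  acc = acc || [];
  resultSet.getRows(batchSize, function (err, rows) {
    if (err) { return resultSet.close(function () { callback(err); }); }
    acc = acc.concat(rows);
    if (rows.length < batchSize) {        // short batch: no more rows
      return resultSet.close(function () { callback(null, acc); });
    }
    fetchAll(resultSet, batchSize, callback, acc);
  });
}

// Stand-in for a driver Result Set, so the sketch runs without a database.
function mockResultSet(rows) {
  var i = 0;
  return {
    getRows: function (n, cb) { var out = rows.slice(i, i + n); i += n; cb(null, out); },
    close: function (cb) { cb(); }
  };
}

fetchAll(mockResultSet([1, 2, 3, 4, 5]), 2, function (err, rows) {
  console.log(rows);   // all five rows, fetched in batches of 2
});
```

Because the loop stops when a batch comes back short, no speculative upper bound on the row count is ever needed.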
The options are:
1. Leave the current behavior as it is.
2. Return the fetched rows together with a new flag indicating the fetch was incomplete.
3. Make the behavior configurable via an option on execute() and/or getConnection().
4. Return an error when the query has more rows than maxRows.
This discussion is only about non-Result Set queries.
Let us know what you think. Is the change beneficial? What code do
you have that depends on the current behavior? How complex do you
want the driver to be? Is the current behavior a feature or a risk?
(Tagging a few random contributors so they see this: @Bigous @bchr02 @rinie @hvrauhal @bjouhier @sagiegurari @tmanolat @nelreina @hexkode @hellboy81 @SorinTO @pmarino90 @jaw187) Don't feel bad if your handle isn't tagged here, I just ran out of patience digging through names.