Use the "skippableUpdate" optimization during bulk import #2285
Comments
This does not seem to be quite right. I set fhirServer/bulkdata/core/enableSkippableUpdates to false and it did not import the data; I set it to true and it imported the data.
With it set to true:
Also note that I did a second import with the value set to true right after the first import with the value set to true, and this was the output:
I was wondering if the total skipped count should still be 0.
I had flipped the logic in a downstream change, and fixed the logic in the current Azure provider.
Moving this back to in progress, as importing a patient that was deleted was working earlier.
This is a conflict with the big merge. It was working when I merged it, and the behavior was changed.
Moving back to in progress due to an issue where part of the import needs an update and other parts need to be able to skip.
- Added an integration test for re-import, which generated an NPE
- Added test-import-skip.ndjson
- Added an update to include the FHIRPersistenceEvent object with the previous resource
- Updated copy-server-config.sh
Signed-off-by: Paul Bastide <pbastide@us.ibm.com>
PR #2585
Finished testing this issue. It now works with the import of a deleted resource, and when importing several resources with some of them changed or deleted, it works fine. Closing the issue.
Is your feature request related to a problem? Please describe.
In #2263 we added support for a server-side update optimization.
This feature request is to start using that optimization when we process a bulk import.
My theory is that this will be much faster for "repeated import" scenarios.
Describe the solution you'd like
We should use the server optimization by default.
Optionally, we might want to introduce a config parameter to opt out of it.
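For illustration only, here is a sketch of how such an opt-out could look in fhir-server-config.json, assuming the property path fhirServer/bulkdata/core/enableSkippableUpdates mentioned in the testing comments above; the exact placement and default value are assumptions, not confirmed here:

```json
{
    "fhirServer": {
        "bulkdata": {
            "core": {
                "enableSkippableUpdates": true
            }
        }
    }
}
```

With the value defaulting to true, the optimization would be active for bulk import unless an operator explicitly opts out.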
Describe alternatives you've considered
Acceptance Criteria
1.
GIVEN an existing resource in the server
WHEN we get an import request that contains a copy of that same resource (same type, id, and contents)
THEN avoid performing the update when the contents match
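A minimal sketch, in Java, of the compare-then-skip check that the acceptance criterion above describes. All class, record, and method names here are hypothetical and do not reflect the server's actual import code; the sketch only shows the shape of the logic: when the optimization is enabled and the incoming resource matches the stored one on type, id, and contents, the update can be skipped.

```java
import java.util.Objects;

/**
 * Illustrative sketch of the "skippable update" decision during import.
 * Names are hypothetical; only the compare-then-skip flow is the point.
 */
public class SkippableUpdateSketch {

    /** Stand-in for a stored or incoming FHIR resource payload. */
    public record ResourceRecord(String resourceType, String id, String contents) {}

    private final boolean enableSkippableUpdates;

    public SkippableUpdateSketch(boolean enableSkippableUpdates) {
        // In practice this flag would come from configuration, e.g.
        // fhirServer/bulkdata/core/enableSkippableUpdates.
        this.enableSkippableUpdates = enableSkippableUpdates;
    }

    /**
     * Returns true when the incoming resource matches the existing one
     * (same type, id, and contents) and the update can therefore be skipped.
     */
    public boolean canSkipUpdate(ResourceRecord existing, ResourceRecord incoming) {
        if (!enableSkippableUpdates || existing == null) {
            return false; // optimization disabled, or nothing stored to compare against
        }
        return Objects.equals(existing.resourceType(), incoming.resourceType())
                && Objects.equals(existing.id(), incoming.id())
                && Objects.equals(existing.contents(), incoming.contents());
    }
}
```

In a "repeated import" scenario, most resources would take the skip path, which is where the expected speedup described above would come from.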
Additional context