Feature/deep insert request #2653

Merged

Merged 17 commits into OData:master on May 30, 2023

Conversation

KenitoInc
Contributor

Issues

This pull request fixes #xxx.

Description

Briefly describe the changes of this pull request.

Checklist (Uncheck if it is not completed)

  • Test cases added
  • Build and test with one-click build and test script passed

Additional work necessary

If a documentation update is needed, please add the "Docs Needed" label to the issue and provide details about the required documentation change in the issue.

@KenitoInc force-pushed the feature/DeepInsert-Request branch from 333e504 to a5481ea on May 17, 2023 10:11
@habbes
Contributor

habbes commented May 23, 2023

My concern with this feature is that it goes outside the normal flow of how users interact with the library to update objects (SaveChanges). The use of DeepInsert comes with additional restrictions that require users to keep track of the entities they're linking to and ensure they have not been modified. I think these two points negatively impact the developer experience to the point that most users would prefer to stick to the SaveChanges API if they have the choice ($batch requests are more likely to be supported by the server anyway).

This is probably beyond the scope of this PR, but I think (as @g2mula had suggested) we should think of a way to unify these two APIs so that the library figures out which API call to make without burdening the user with that detail.

Here's a high-level view of how we could achieve that:

  • Instead of exposing different methods for DeepInsert or DeepUpdate, we add a new flag to SaveChangesOptions, e.g. PreferDeepInsert or AllowDeepInsert, or something along those lines
  • The user will add objects and links as they normally do, using existing APIs like AddObject, AddRelatedObject, DataServiceCollection, etc.
  • To sync with the server they call SaveChangesAsync(SaveChangesOptions.PreferDeepInsert)
  • When that flag is set, the client will try to perform a bulk insert (or bulk update, if enabled) if it deems it possible; otherwise it defaults to making a batch call or multiple requests to the server.
  • How does the client determine whether a bulk update is possible? A naive way is to visit all the changed entries; if they form a single connected graph, then we know they are all interconnected and can be combined into a single bulk call. In that case it finds a possible root object and makes a bulk update with that as the root. It can use the states of the filtered objects to decide whether to do an insert or an update. (A rough sketch of this check is shown after the list.)
  • If the objects do not meet that condition (one connected graph), then it falls back to what SaveChangesAsync normally does
  • This algorithm could be refined to support multiple root objects in a single bulk update call, or multiple update calls if we have multiple disjoint graphs.
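The connectivity-and-root check described above could look roughly like the following. This is a hypothetical sketch, not code from this PR: the DeepInsertPlanner class and TryFindSingleRoot method are invented for illustration, and tracked entities and links are modeled as plain objects and (source, target) pairs, roughly what DataServiceContext.Entities and DataServiceContext.Links keep track of.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of the "single connected graph" check suggested above.
// It is not the PR's implementation.
static class DeepInsertPlanner
{
    public static bool TryFindSingleRoot(
        IReadOnlyCollection<object> changedEntities,
        IReadOnlyCollection<(object Source, object Target)> links,
        out object root)
    {
        root = null;
        if (changedEntities.Count == 0)
        {
            return false;
        }

        // Build an undirected adjacency list over the changed entities.
        var adjacency = changedEntities.ToDictionary(e => e, _ => new List<object>());
        foreach (var (source, target) in links)
        {
            if (adjacency.ContainsKey(source) && adjacency.ContainsKey(target))
            {
                adjacency[source].Add(target);
                adjacency[target].Add(source);
            }
        }

        // Breadth-first search from an arbitrary entity; if every changed
        // entity is reachable, the pending changes form one connected graph.
        var visited = new HashSet<object> { changedEntities.First() };
        var queue = new Queue<object>(visited);
        while (queue.Count > 0)
        {
            foreach (var neighbor in adjacency[queue.Dequeue()])
            {
                if (visited.Add(neighbor))
                {
                    queue.Enqueue(neighbor);
                }
            }
        }

        if (visited.Count != changedEntities.Count)
        {
            return false; // Disjoint graphs: fall back to batch/multiple requests.
        }

        // A candidate root is an entity that no tracked link points to.
        var targets = new HashSet<object>(links.Select(l => l.Target));
        var roots = changedEntities.Where(e => !targets.Contains(e)).ToList();
        if (roots.Count != 1)
        {
            return false; // Zero roots (a cycle) or multiple roots: fall back.
        }

        root = roots[0];
        return true;
    }
}
```

The zero-root (cycle) and multiple-root cases are exactly where the client would fall back to the existing batch or multi-request behavior, matching the fallback condition in the last two bullets.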

@ElizabethOkerio
Contributor

@habbes I think leaving the library to determine the root object might be tricky, because we can have several objects that are connected but are not the specific resource type that the client wants to deep-insert. Also, the deep-insert API will be for a particular type, for example. This can lead to several deep insert requests being created for endpoints that do not even exist. I see this approach as error-prone.

@gathogojr dismissed their stale review May 25, 2023 07:51

Unblock pull request

@habbes
Contributor

habbes commented May 29, 2023

> @habbes I think leaving the library to determine the root object might be tricky, because we can have several objects that are connected but are not the specific resource type that the client wants to deep-insert. Also, the deep-insert API will be for a particular type, for example. This can lead to several deep insert requests being created for endpoints that do not even exist. I see this approach as error-prone.

@ElizabethOkerio
Currently, when you call SaveChangesAsync, you don't select which objects get sent to the server and which ones do not. So the argument you make, that resources other than the ones the customer wants to deep insert may also be saved, applies equally to the way the client currently works, even without deep insert. Even now, it's possible for the client to make POST and PATCH requests to endpoints which do not exist on the server.

In my suggestion (at least for a conceptual first iteration), I propose that deep insert be used only if the customer enables the flag and all the changed objects are in the same subgraph, such that they could be fulfilled by a single deep insert/update. There would still be a challenge in determining what the root object(s) are (especially if there are cycles). But we would transparently fall back to a batch or multi-request approach if these conditions are not met.
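For illustration, the user-facing flow under this suggestion would stay essentially what it is today. The sketch below uses only existing client APIs (DataServiceContext, AddObject, AddRelatedObject, SaveChangesAsync); the Person and Order types and the service URL are hypothetical stand-ins, and the PreferDeepInsert flag discussed in this thread does not exist yet, so it appears only in a comment.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.OData.Client;

// Hypothetical entity types for illustration only; a real client would
// typically use types generated from the service's $metadata.
public class Person
{
    public int ID { get; set; }
    public string Name { get; set; }
}

public class Order
{
    public int ID { get; set; }
    public decimal Amount { get; set; }
}

public static class DeepInsertUsageSketch
{
    public static async Task SaveGraphAsync()
    {
        // Placeholder service root; assumes the service exposes a People
        // entity set with an Orders navigation property.
        var context = new DataServiceContext(new Uri("https://example.com/odata/"));

        var person = new Person { ID = 1, Name = "Alice" };
        var order = new Order { ID = 10, Amount = 99.95m };

        // Track the new objects and the link between them with existing APIs.
        context.AddObject("People", person);
        context.AddRelatedObject(person, "Orders", order);

        // Today this is sent as a batch with a single changeset. Under the
        // proposal in this thread, a flag such as
        // SaveChangesOptions.PreferDeepInsert (not part of the library yet)
        // would let the client collapse this into one deep insert when the
        // changed objects form a single connected graph, and transparently
        // fall back to this behavior otherwise.
        await context.SaveChangesAsync(SaveChangesOptions.BatchWithSingleChangeset);
    }
}
```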

@habbes
Contributor

habbes previously approved these changes May 30, 2023

Approved since the feature is still internal. Looking forward to seeing the e2e tests.

@pull-request-quantifier-deprecated

This PR has 1133 quantified lines of changes. In general, a change size of up to 200 lines is ideal for the best PR experience!


Quantification details

Label      : Extra Large
Size       : +829 -304
Percentile : 100%

Total files changed: 17

Change summary by file extension:
.cs : +801 -303
.txt : +19 -1
.csproj : +1 -0
.bsl : +8 -0

Change counts above are quantified counts, based on the PullRequestQuantifier customizations.

Why proper sizing of changes matters

Optimal pull request sizes drive a more predictable PR flow as they strike a
balance between PR complexity and PR review overhead. PRs within the
optimal size (typically small or medium-sized PRs) mean:

  • Fast and predictable releases to production:
    • Optimal size changes are more likely to be reviewed faster with fewer
      iterations.
    • Similarity in low PR complexity drives similar review times.
  • Review quality is likely higher as complexity is lower:
    • Bugs are more likely to be detected.
    • Code inconsistencies are more likely to be detected.
  • Knowledge sharing is improved among the participants:
    • Small portions can be assimilated better.
  • Better engineering practices are exercised:
    • Solving big problems by dividing them into well-contained, smaller problems.
    • Exercising separation of concerns within the code changes.

What can I do to optimize my changes

  • Use the PullRequestQuantifier to quantify your PR accurately
    • Create a context profile for your repo using the context generator
    • Exclude files that do not need to be reviewed or that do not increase review complexity. Example: autogenerated code, docs, project IDE setting files, binaries, etc. Check out the Excluded section from your prquantifier.yaml context profile.
    • Understand your typical change complexity and drive towards the desired complexity by adjusting the label mapping in your prquantifier.yaml context profile.
    • Only use the labels that matter to you; see the context specification to customize your prquantifier.yaml context profile.
  • Change your engineering behaviors
    • For PRs that fall outside of the desired spectrum, review the details and check if:
      • Your PR could be split into smaller, self-contained PRs instead
      • Your PR only solves one particular issue. (For example, don't refactor and code new features in the same PR).

How to interpret the change counts in git diff output

  • One line was added: +1 -0
  • One line was deleted: +0 -1
  • One line was modified: +1 -1 (git diff doesn't know about modifications; it
    interprets that line as one addition plus one deletion)
  • Change percentiles: Change characteristics (addition, deletion, modification)
    of this PR in relation to all other PRs within the repository.



@KenitoInc merged commit 3133a34 into OData:master on May 30, 2023
@leoerlandsson

Hi,

Is there any info on when the Deep Insert functionality will make it into a release?

Thanks!

Br,
Leo

@KenitoInc mentioned this pull request on Sep 22, 2023