
Duplicate records for a single saveInBackground call because the Internet provider blocks the response #4006

Closed
oazya500 opened this issue Jul 8, 2017 · 47 comments

Comments

@oazya500

oazya500 commented Jul 8, 2017

Our Internet provider sometimes blocks the response to the device, so the device retries the save. All of the requests actually succeed and save the object, but the device receives an error because the provider blocks the site in the response and shows this page to the device instead:

http://82.114.160.94/webadmin/deny/

Eventually a response gets through and the device stops retrying, but by then the object has sometimes been saved up to 4 times.

This is a big issue.


@ioscargalvan

I am experiencing this issue too. Do you have a workaround for it?

@oazya500
Author

We want the Parse Server team to solve this issue.

You can work around it by creating a random number when you create the object, querying for that number in the cloud before saving, and responding with the existing object if it has already been saved. To be sure the number is unique you can combine it with another value, like the creation date or anything else.

It is a big issue for me because I always get duplicated values.

@ioscargalvan

This is a deal-breaking issue for me. I'm switching from Parse to firecloud; let's see how it goes.

@flovilmart
Contributor

You'll have the same issue anywhere you try to save data on a server and no confirmation that the server actually saved the data reaches the client. And if you're better off with firecloud, good for you!

@ioscargalvan

Yeah, you're right. But this issue has been showing up more and more. Don't get me wrong: is there any way I can assist you? Because this issue seems new to me. I have been using Parse Server for a few months and it never showed up before.

@flovilmart
Contributor

Does it show up for you? And if it does, we need a bit more than "sometimes, under peculiar network conditions", etc.

@ioscargalvan

Ok, I'll try to go as deep as possible.

In the last week and a half, it happened to me 4 times. I have been having problems with my internet connection, and whenever the connection was bad and I called saveInBackground(), LogCat kept reporting a proxy status of NODATA (which, I assume, means that Android couldn't connect to the internet). Then, after the connection succeeded, Parse notified me that the object was saved successfully (and this is where the data is duplicated). I think it is relevant to say that the notification came only once.

If I can be of any more help, just ask :).

@oazya500
Author

I do not know how the network blocking works. I remember testing my app on several Android versions, and one of the versions always failed. I tried again and again, and it took some time to realize that the site was blocked for that version, while all the other versions worked fine. When I ran a VPN app on the version that always failed, the app worked fine. I do not know why it was blocked for that version only; maybe some random function blocks certain sites.

@acinader
Contributor

acinader commented Jul 11, 2017

This has definitely been an issue for us. We have solved it now, but it took some time.

  1. Our first pass: when we save objects, create a unique id for the object on the client. A beforeSave hook would query for that unique id; if it found a dupe, it would not save the object.

While this did work sometimes, we found that we would still occasionally get duplicate objects when a client had a bad internet connection.

  2. So we created an afterSave hook that queries for all objects saved in the last ten minutes and marks any that share the same unique id as a dupe. This solution has proven to be robust.

There is probably a better general solution than our afterSave hook, like a periodic timer that checks each class for dupes, but it would require our client SDKs to make a GUID for each object before sending it to the server, something like what you get from an object persistence service in Java such as Hibernate.

I think a general solution baked into the SDKs (and ideally documented with a solution for the REST API) would be a valuable addition to Parse.
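Below is a minimal Cloud Code sketch of the two hooks described above, assuming the client writes its generated id into a hypothetical `clientId` field on a class called `Post` (both names are illustrative, and it uses the modern async hook style rather than the 2017-era `response.success` API):

```javascript
// beforeSave (first pass): reject a brand-new object whose clientId
// has already been saved.
Parse.Cloud.beforeSave('Post', async (request) => {
  const obj = request.object;
  if (!obj.isNew() || !obj.get('clientId')) {
    return; // only screen new objects that carry a client-generated id
  }
  const query = new Parse.Query('Post');
  query.equalTo('clientId', obj.get('clientId'));
  const existing = await query.first({ useMasterKey: true });
  if (existing) {
    throw new Parse.Error(Parse.Error.DUPLICATE_VALUE, 'duplicate clientId');
  }
});

// afterSave (second, more robust pass): mark any object saved in the
// last ten minutes that shares this clientId as a dupe.
Parse.Cloud.afterSave('Post', async (request) => {
  const obj = request.object;
  if (!obj.get('clientId')) return;
  const tenMinutesAgo = new Date(Date.now() - 10 * 60 * 1000);
  const query = new Parse.Query('Post');
  query.equalTo('clientId', obj.get('clientId'));
  query.greaterThan('createdAt', tenMinutesAgo);
  query.notEqualTo('objectId', obj.id);
  const dupes = await query.find({ useMasterKey: true });
  await Promise.all(dupes.map((dupe) => {
    dupe.set('isDupe', true); // hypothetical flag; deleting would also work
    return dupe.save(null, { useMasterKey: true });
  }));
});
```

Marking rather than deleting keeps the duplicates around for inspection.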

Thoughts and/or questions most welcome!

@oazya500
Author

Some pictures of the blocking page; the site cannot be reached from outside my country:
[screenshot: 2017-07-12, 1:28:20 am]
[screenshot: 2017-07-12, 1:28:07 am]

@oazya500
Author

I think the best way to solve this is to have the device request a pending ID from the server before saving the object. When the ID arrives at the device for the first time, it starts saving the whole object. When a second ID arrives, it checks the object: if the object already has an ID, it does not change the ID or save the object again.

This would also work even if you have the Local Datastore enabled. It is a simple solution, but I am sure that if the team works on this problem they will come up with something better than my solution.

@montymxb
Contributor

@acinader I do like the idea of the clients setting a unique id before submitting to the server. The same object coming in more than once could then be screened out by the hooks, the server itself, or even another SDK.

@oazya500
Author

oazya500 commented Jul 11, 2017

@montymxb but if you enable the Local Datastore you will not know which of the duplicate objects must be removed.

Also, you cannot be sure the id is truly unique even if you query the last 10 minutes; you can only say it is probably unique within that window.

@montymxb
Contributor

montymxb commented Jul 11, 2017

@oazya500 You would have to respect the uuid in a local data store as well, so ideally you wouldn't have duplicates there either. It is up to the implementation to guard against them.

As for uniqueness: on the client end you would have to check the server and anything else (like a local datastore) to ensure you're not duplicating an existing entry. Essentially ACID compliance, with individual requests completing (and being unique) or being rejected outright.

@flovilmart
Contributor

That would work in theory; in practice this moves ID generation onto the client and hopes there are no conflicts :/

@montymxb
Contributor

It's ideal in theory but tough in application :/ , though not impossible.

@flovilmart
Contributor

Not really, but that means the server is just doing upserts, which can be costly.

@montymxb
Contributor

Well, it helps guarantee the validity of a transaction, but it is exceptionally expensive; probably too expensive for anything more than small apps.

@flovilmart
Contributor

And I'm not sure we'll need it in that case. @acinader, you mentioned you had the issue; perhaps the exponential backoff in the apps is not good enough. Perhaps also the connection gets closed without a timeout, in which case we could infer that the save actually happened.

@oazya500
Author

@flovilmart in my case, on the first and second attempts I get a timeout and the object is saved. A new request always starts 10 seconds after the previous one. On the third attempt I get a bad response.

We agreed the object must have an ID before the save, but that ID must be generated at the server to ensure there are no conflicts with other IDs.

@flovilmart
Contributor

@oazya500

We agreed the object must have an ID before the save, but that ID must be generated at the server to ensure there are no conflicts with other IDs.

This is already how it's done. The client has no idea that the save failed because of a network error; perhaps we should cancel/abort the save when the request is cancelled server side instead, though I'm not sure how to handle that.

A new request always starts 10 seconds after the previous one

Can you provide network traces, stack traces, call traces? What SDK are you using?

I'm all for improving the situation. At the moment, if the response fails, we don't destroy the created object. Even if we could, that has large implications, and I'm not sure any system supports that correctly; moreover, your average MEAN-stack app would not support it either.

Thoughts @acinader @montymxb ?

@montymxb
Contributor

montymxb commented Jul 12, 2017

@oazya500 For starters, knowing which SDK(s) are involved would be helpful; that would narrow down how you set things up on the client. As for a filtering solution, I do like the beforeSave/afterSave Cloud Code hooks @acinader brought up.

For generating a 100% unique ID you can't really get around querying the server beforehand. However, and this is just a thought, you could use the current timestamp in combination with a randomly generated id, something like timestamp + uuid. That way, even if you did draw a colliding id, the natural incrementing of the timestamp would require two identical ids to be generated at the same instant for a collision to occur. The SDKs wouldn't have to poll the server to do this, which saves a good chunk of time.

Even though a duplicate is not too likely, it may still happen from time to time. Cloud code hooks or another sdk could run as often as needed to account for possible collisions, whether that's each request or every 10 minutes or so, whatever time frame works.

In the event of a collision you would have to have some sort of conflict resolution to follow through on (assuming you didn't catch it before the duplicate saved). A simple resolution mechanism would be to just toss the newer object(s) and keep the oldest one. That's a bit simplistic, but from there you should be able to put together a mechanism that works best for you.

::EDIT::

If you want to make your uuid even more unique, it helps to add in additional information, such as the current user's id/username or something else that wouldn't be shared with other accounts. That would further reduce the likelihood of a uuid collision to objects saved in the same session or by the same user. Anything else that reduces collisions can be added as needed.
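A sketch of the kind of composite id described above, assuming a JavaScript client where `crypto.randomUUID()` is available (the `clientId` field name is, again, just for illustration):

```javascript
// Compose a client-side id from a timestamp, a random UUID, and a
// user-specific part. A collision now requires the same user to draw
// the same UUID at the same millisecond.
function makeClientId(user) {
  const timestamp = Date.now();              // naturally incrementing part
  const random = crypto.randomUUID();        // 122 bits of randomness
  const userPart = user ? user.id : 'anon';  // scope collisions to one user
  return `${timestamp}-${userPart}-${random}`;
}

// Stamp the object before the first save attempt and reuse the same id
// on every retry, so the server can screen out duplicates.
const post = new Parse.Object('Post');
post.set('clientId', makeClientId(Parse.User.current()));
await post.save();
```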

@oazya500
Author

@flovilmart

This is already how it's done, the client has no idea that the save failed from a network error, perhaps we should cancel the save / abort when the request is cancelled server side instead, not sure how to handle that.

Make the client request an ID from the server before saving the object: without saving anything, just request a unique ID. Once the ID request succeeds, start saving the whole object.
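A rough sketch of that reserve-an-id-first flow, using a hypothetical `reserveId` Cloud Code function (nothing like this ships with Parse Server; it only illustrates the proposal):

```javascript
// Server: hand out a unique id without saving the real object. A lost
// response here only leaks an unused reservation instead of duplicating data.
Parse.Cloud.define('reserveId', async () => {
  const reservation = new Parse.Object('IdReservation');
  await reservation.save(null, { useMasterKey: true });
  return reservation.id;
});

// Client: obtain the id first, then save. Retries reuse the same id, so
// a beforeSave hook can reject or merge any duplicate that slips through.
const reservedId = await Parse.Cloud.run('reserveId');
const post = new Parse.Object('Post');
post.set('clientId', reservedId);
await post.save();
```

Note that this shifts the lost-response problem onto the `reserveId` call, where a lost reply costs only an unused id rather than duplicated data.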

@oazya500
Author

@flovilmart

Can you provide network traces, stack traces, call traces? What SDK are you using?

I will provide them when I am on the network that causes the issue. The network I am on now does not produce this error.

[screenshot: 2017-07-12, 4:54:26 pm]
[screenshot: 2017-07-12, 4:54:20 pm]

@flovilmart
Contributor

You should probably investigate that SSL exception, because it’s not coming from parse-server.

@oazya500
Author

@flovilmart

You should probably investigate that SSL exception, because it’s not coming from parse-server.

Ignore this error; it happens when the Wi-Fi switches off, and it is not the error we are talking about.

The network I am on now works fine and does not show the duplication error.

@flovilmart
Contributor

From what I can see, the objects are created correctly, and the server responds properly. I’m not sure the issue is coming from the SDK at all. How is your object saved?

@oazya500
Author

@flovilmart

From what I can see, the objects are created correctly, and the server responds properly. I’m not sure the issue is coming from the SDK at all. How is your object saved?

On this network everything is fine and there is no error. I uploaded the pictures just so you can see all of my configuration values. I will upload pictures of the error when I am on the network that causes it.

@oazya500
Author

@flovilmart I was able to configure a MikroTik router to act like the network that causes the problem.

I created a firewall filter that blocks incoming packets. When the rule matches 100% of the time, the device tries to save the object 3 times and then stops making further save requests; that duplicates the object 3 times on the server.

When the rule drops about 80% of packets at random, the device retries the save until it succeeds; if it fails 3 times, it stops requesting for some time and then tries to save again.

Every failed save creates a duplicate object on the server.

[screenshot: 2017-07-12, 10:25:07 pm]
[screenshot: 2017-07-12, 10:25:10 pm]
[screenshot: 2017-07-12, 9:51:11 pm]
[screenshot: 2017-07-12, 9:51:52 pm]

@oazya500
Author

@montymxb if you are sure the id is unique, you can add a unique index in MongoDB, and then you will not need to check for duplicated objects in afterSave.
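In the mongo shell that would look something like this, assuming the clients write their id into a `clientId` field and the class (and therefore the collection) is called `Post`:

```javascript
// Enforce uniqueness at the database level; a retried insert then fails
// with a duplicate-key error instead of creating a second document.
// `sparse` keeps documents that never set clientId from colliding on null.
db.Post.createIndex({ clientId: 1 }, { unique: true, sparse: true });
```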

@acinader
Contributor

acinader commented Jul 13, 2017

If your app is controlling a nuke plant, then I think we would want a uuid from the server. However, for just about every other application, we can generate uuids on the client and just accept the fact that every couple hundred years we'll have a collision. Here's a good writeup: https://stackoverflow.com/questions/105034/create-guid-uuid-in-javascript

In our case we're generating the uuids on iOS, and while I didn't write that bit of code (I can look into it), I'm pretty sure Apple exposes some bits that make uuids closer to 100% unique than the current JavaScript alternatives in the post above.

If we wanted to get closer to nuke-plant reliability, we could look for dupes only within a short window, say 30 min to 1 hour (a day?).

The first time I encountered this idea of "practically unique" vs. "provably unique" it gave me a lot of pause... I've since just learned to accept it ;).

@flovilmart
Contributor

You'd also want to put more thought into network reliability and redundancy to make sure everything reaches the server. I don't believe the solution is client side, and you can implement it through a uuid key, validating in Cloud Code that it doesn't already exist.

@oazya500
Author

@acinader the solution must involve both the server and the client side. The solution is to have the client obtain an objectId from the server before saving the object; when the objectId comes back to the client, it starts saving the whole object. This solution would also work with the Local Datastore and saveEventually.

@acinader saving a uuid with every object will add size and cost to your database.

@flovilmart
Contributor

And what if the subsequent save doesn't return? I mean, this is getting out of whack, really; the solutions are NOT viable, not counting the impact on everyone who doesn't care about that mechanism.

@oazya500 If you need a transactional system, you're not looking at the right technology.

@acinader
Contributor

acinader commented Jul 13, 2017

@flovilmart I'm not advocating anything, just sharing how we solved the problem. FYI, we had already been generating a uuid on the client for the relevant object anyway, so it was a pretty simple fix to just use it in the afterSave to mark dupes.

FYI, the only class where this has been an issue is one with a (potentially large) file associated with it. I never really dug into exactly why it was happening; I just assumed it was a limitation and worked around it. It would be a lot of wasted queries to do this on all classes after all inserts.

I do think our solution is a very solid one for anyone facing a similar problem with a particular class. If there were enough demand/complaints, it might be worth generalizing the solution so it could be "turned on" on a class-by-class basis. I am neither advocating that at this point nor volunteering ;)

Edit: ok, I went back and read my comment and I was actually advocating, so mea culpa. I'll blame jet lag.

@oazya500
Author

@acinader if you have this issue only for a specific class, you should search for the reason it happens for that class. For example, the timeout may expire before the object is inserted, in which case you should increase the timeout; or MongoDB may take longer to insert the object because there are a lot of indexes on that class, in which case you should delete the unnecessary indexes.

@flovilmart
Contributor

Overall, we have to remember that the client is 'smart' and will retry by itself if a call fails. Perhaps this is not a desired behavior.

On object creation we could probably 'undo' the save if the response times out, because that's a simple save and the deletion is obvious (delete by id). In the case of a PUT this is harmless, as the client will just retry putting the same data.

I can have a look at a soft rollback on res.timeout.

@oazya500
Author

What if the response does not time out? Sometimes the response is bad JSON because the network overwrites it with a frame containing "ACCESS TO THIS WEBSITE IS DENIED", but the object has been created.

I will upload pics when I am on that network.

@flovilmart
Contributor

Then probably parse-server is not the right technology for your use case. You want transactional consistency and this can't be guaranteed by parse-server.

@oazya500
Author

The error is "connection reset by peer" when the network blocks the response and the server duplicates the objects.

[screenshot: untitled]
[screenshot: untitled3]
[screenshot: untitled2]

@flovilmart
Contributor

Are you sure it's not your hosting provider that's throttling you?

@oazya500
Author

I am sure it is the Internet provider, because all blocked sites show the same frame when you browse to them. Also, when I search Google for the IP 82.114.160.94, it shows that it is our Internet provider's IP.

[screenshot: 2017-07-12, 1:28:20 am]

@oazya500
Author

Any news on this issue?

@flovilmart
Contributor

It is unlikely we’ll address it

@stale

stale bot commented Sep 18, 2018

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix label Sep 18, 2018
@stale stale bot closed this as completed Sep 25, 2018
@emanuelmartin

This is happening to me on every save that has more than 3 objects.

@mtrezza
Member

mtrezza commented Oct 6, 2021

There are two ways I can think of to mitigate this:

  • Use a custom object ID, generated client-side
  • Use an Idempotency strategy (currently on master, soon to be released in Parse Server 5)
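For reference, enabling the idempotency option looks roughly like this in the Parse Server configuration (the `paths` and `ttl` values are examples; check the Parse Server 5 release notes for the exact shape):

```javascript
const { ParseServer } = require('parse-server');

const api = new ParseServer({
  appId: 'myAppId',
  masterKey: 'myMasterKey',
  databaseURI: 'mongodb://localhost:27017/dev',
  // Deduplicate retried requests: the SDK sends a request id header and
  // the server ignores any request it has already seen within the TTL.
  idempotencyOptions: {
    paths: ['classes/.*'], // endpoints to protect (regex patterns)
    ttl: 300,              // seconds a request id is remembered
  },
  // Alternative mitigation: let clients supply their own objectId.
  allowCustomObjectId: true,
});
```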
