Right now sync is done by chunk period: each chunk processes all of the measurements in parallel (with as many workers as configured), and if one measurement fails, the whole chunk is marked as a bad chunk (even though every other measurement was synced/copied OK).
Our DBs usually have one big measurement and several smaller ones, so when data is processed by chunks, a failure in the big measurement impacts all the other data. Perhaps with a per-measurement parallel process the data would be copied faster and could also be recovered per measurement.
This change requires a big refactor.
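To make the proposal concrete, here is a minimal sketch of per-measurement parallel sync with independent success/failure tracking, assuming a Go codebase. Everything here is hypothetical: `syncMeasurement` is a stand-in for the real copy logic, and the worker-pool shape is just one possible design; the point is that each measurement gets its own result, so one failure no longer marks the whole chunk bad.

```go
package main

import (
	"fmt"
	"sync"
)

// syncMeasurement is a hypothetical stand-in for copying one
// measurement's data between databases; here it fails only for
// the measurement named "bad" to simulate a partial failure.
func syncMeasurement(name string) error {
	if name == "bad" {
		return fmt.Errorf("sync failed for %q", name)
	}
	return nil
}

// syncPerMeasurement runs the sync per measurement, bounded to
// `workers` concurrent goroutines, and records each measurement's
// outcome independently. A failed measurement can then be retried
// on its own instead of re-syncing the whole chunk.
func syncPerMeasurement(measurements []string, workers int) map[string]error {
	results := make(map[string]error, len(measurements))
	var mu sync.Mutex
	sem := make(chan struct{}, workers) // limits concurrency to `workers`
	var wg sync.WaitGroup

	for _, m := range measurements {
		wg.Add(1)
		sem <- struct{}{} // acquire a worker slot
		go func(name string) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			err := syncMeasurement(name)
			mu.Lock()
			results[name] = err // nil means this measurement synced OK
			mu.Unlock()
		}(m)
	}
	wg.Wait()
	return results
}

func main() {
	res := syncPerMeasurement([]string{"cpu", "mem", "bad", "disk"}, 2)
	for name, err := range res {
		if err != nil {
			fmt.Printf("%s: FAILED (%v), can be recovered alone\n", name, err)
		} else {
			fmt.Printf("%s: ok\n", name)
		}
	}
}
```

With this shape, only the failed measurement needs recovery, while the already-copied measurements keep their "ok" status.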