
sql: provide documentation on row-level ttl workarounds #5647

Closed
awoods187 opened this issue Oct 17, 2019 · 7 comments · Fixed by #7999
Labels
C-doc-improvement O-sales-eng Internal source: Sales Engineering P-1 High priority; must be done this release T-missing-info

Comments

@awoods187
Contributor

We don't yet support row-level TTL. This feature is useful because it reduces the effort developers must spend deleting and managing data.

In the meantime, we should document workarounds, such as using cron jobs to periodically delete data in a performance-conscious manner.
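As a minimal sketch of that workaround (the events table and created_at column here are hypothetical, not from this issue), a cron job could run a bounded delete on a schedule, rerunning it until it affects zero rows:

-- Sketch: expire rows older than 30 days in small batches.
-- The client repeats this statement until it reports 0 rows deleted.
DELETE FROM events
WHERE created_at < now() - INTERVAL '30 days'
LIMIT 1000;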

cc @rkruze who was asking about this.

@awoods187 awoods187 added A-sql P-2 Normal priority; secondary task labels Oct 17, 2019
@drewdeally

@awoods187 any progress on this?

@awoods187
Contributor Author

We haven't prioritized the doc work here, given the opportunity cost of other work. Do you want to make a case for raising its priority?

@jseldess jseldess added O-sales-eng Internal source: Sales Engineering C-doc-improvement T-missing-info labels Jun 17, 2020
@awoods187 awoods187 added P-1 High priority; must be done this release and removed P-2 Normal priority; secondary task labels Jul 15, 2020
@ericharmeling ericharmeling added P-2 Normal priority; secondary task and removed P-1 High priority; must be done this release labels Jul 15, 2020
@awoods187 awoods187 added P-1 High priority; must be done this release and removed P-2 Normal priority; secondary task labels Jul 15, 2020
@awoods187
Contributor Author

I'm making this P-1 and grouping it with various delete issues:
#5592
#4819
#4818

@awoods187
Contributor Author

Here's a customer ticket related to the need for better recommendations around batch deletes: https://cockroachdb.zendesk.com/agent/tickets/5882

@mgartner
Contributor

Some guidance on why and how to batch large writes would be great. For example, with Postgres you'd batch writes to avoid holding row-level locks on a huge number of rows for a long time, which would block other concurrent writes to those rows. Batching reduces the maximum number of rows locked at any given time, and those locks are held for a shorter period. I'm not sure whether CRDB behaves similarly for large writes, but an explanation along these lines would be great.
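A sketch of that batching pattern (the accounts table and its columns are hypothetical, purely for illustration): each statement touches at most 2,000 rows, so locks are bounded in number and held only for the duration of one small transaction. A client reruns the statement until it affects fewer rows than the limit:

-- Hypothetical batched write: the 'flagged = false' predicate means
-- each rerun picks up only rows not yet updated by a previous batch.
UPDATE accounts
SET flagged = true
WHERE last_login < now() - INTERVAL '1 year'
  AND flagged = false
LIMIT 2000;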

@jordanlewis
Member

jordanlewis commented Aug 11, 2020

Since @rohany's change that adds crdb_internal_mvcc_timestamp, we can recommend this pattern:

root@127.0.0.1:55288/defaultdb> delete from t where crdb_internal_mvcc_timestamp < 1597173188127148000.0000000000 LIMIT 10000;
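The literal in that statement is an HLC timestamp: nanoseconds since the Unix epoch, stored as a DECIMAL. Rather than hard-coding it, the cutoff could be computed at run time; this is a sketch, and the exact cast is an assumption rather than something confirmed in this thread:

-- Delete up to 10,000 rows whose last MVCC write is older than 30 days.
DELETE FROM t
WHERE crdb_internal_mvcc_timestamp < (extract(epoch FROM now() - INTERVAL '30 days') * 1e9)::DECIMAL
LIMIT 10000;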

@drewdeally

drewdeally commented Aug 11, 2020 via email
