Commit
Merge branch 'main' into jgmw/chore-tarsync-watch-mode
Josh-Walker-GM committed Aug 23, 2024
2 parents ac13dec + 9fdf50e commit a130d41
Showing 720 changed files with 281,726 additions and 312 deletions.
72 changes: 30 additions & 42 deletions docs/versioned_docs/version-3.x/services.md
@@ -316,6 +316,7 @@ validate(input.usPhone, 'US Phone Number', {
* `message`: a custom error message if validation fails
```jsx
validate(input.usPhone, {
format: {
@@ -711,7 +712,6 @@ const createUser = (input) => {
```
You can provide the PrismaClient to be used for the transaction and callback.
```jsx
import { db } from 'src/lib/db'

@@ -770,7 +770,7 @@ Why use a cache? If you have an expensive or time-consuming process in your serv
:::info What about GraphQL caching?
-You could also cache data at the [GraphQL layer](https://community.redwoodjs.com/t/guide-power-of-graphql-caching/2624) which has some of the same benefits. Using Envelop plugins you can add a response cache *after* your services (resolver functions in the context of GraphQL) run - with a global configuration.
+You could also cache data at the [GraphQL layer](https://community.redwoodjs.com/t/guide-power-of-graphql-caching/2624) which has some of the same benefits. Using Envelop plugins you can add a response cache _after_ your services (resolver functions in the context of GraphQL) run - with a global configuration.
However, by placing the cache one level "lower," at the service level, you get the benefit of caching even when one service calls another internally, or when a service is called via another serverless function, and finer grained control of what you're caching.
@@ -782,7 +782,7 @@ In our example above you could cache the GraphQL query for the most popular prod
As of this writing, Redwood ships with clients for the two most popular cache backends: [Memcached](https://memcached.org/) and [Redis](https://redis.io/). Service caching wraps each of these in an adapter, which makes it easy to add more clients in the future. If you're interested in adding an adapter for your favorite cache client, [open an issue](https://github.com/redwoodjs/redwood/issues) and tell us about it! Instructions for getting started with the code are [below](#creating-your-own-client).
-:::info
+::: info
If you need to access functionality in your cache client that the `cache()` and `cacheFindMany()` functions do not handle, you can always get access to the underlying raw client library and use it however you want:
@@ -795,12 +795,7 @@ export const updatePost = async ({ id, input }) => {
where: { id },
})
// highlight-next-line
-await cacheClient.MSET(
-  `post-${id}`,
-  JSON.stringify(post),
-  `blogpost-${id}`,
-  JSON.stringify(post)
-)
+await cacheClient.MSET(`post-${id}`, JSON.stringify(post), `blogpost-${id}`, JSON.stringify(post))

return post
}
@@ -835,7 +830,7 @@ What if we add a "type" into the cache key, so we know what type of thing we're
One solution would be to put all of the data that we care about changing into the key, like: `product-41442-${description}`. The problem here is that keys can only be so long (in Memcached it's 250 bytes). Another option could be to hash the entire product object and use that as the key (this can encompass the `product` part of the key as well as the ID itself, since *any* data in the object being different will result in a new hash):
```js
-import { md5 } from 'blueimp-md5'
+import { md5 } from "blueimp-md5"

cache(md5(JSON.stringify(product)), () => {
// ...
@@ -855,7 +850,6 @@ cache(product, () => {
// ...
})
```
:::
One drawback to this key is in potentially responding to *too many* data changes, even ones we don't care about caching. Imagine that a product has a `views` field that tracks how many times it has been viewed in the browser. This number will be changing all the time, but if we don't display that count to the user then we're constantly re-creating the cache for the product even though no data the user will see is changing. There's no way to tell Prisma "set the `updatedAt` when the record changes, but not if the `views` column changes." This cache key is too variable. One solution would be to move the `views` column to another table with a `productId` pointing back to this record. Now the `product` is back to just containing data we care about caching.
@@ -879,7 +873,7 @@ How does that last one work? We get a list of all the keys and then apply a hash
```javascript
const product = db.product.findUnique({ where: { id } })
const columns = Object.keys(product) // ['id', 'name', 'sku', ...]
const hash = md5(columns.join(',')) // "e4d7f1b4ed2e42d15898f4b27b019da4"

cache(`v1-product-${hash}-${id}-${updatedAt}`, () => {
// ...
@@ -895,13 +889,9 @@ Note that this has the side effect of having to select at least one record from
You can skirt these issues about what data is changing and what to include or not include in the key by just setting an expiration time on this cache entry. You may decide that if a change is made to a product, it's okay if users don't see the change for, say, an hour. In this case just set the expiration time to 3600 seconds and it will automatically be re-built, whether something changed in the record or not:
```js
-cache(
-  `product-${id}`,
-  () => {
-    // ...
-  },
-  { expires: 3600 }
-)
+cache(`product-${id}`, () => {
+  // ...
+}, { expires: 3600 })
```
This leads to your product cache being rebuilt every hour, even though you haven't made any changes that are of consequence to the user. But that may well be worth the tradeoff versus rebuilding the cache when *no* useful data has changed (like the `views` column being updated).
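To make the `expires` behavior concrete, here is a toy in-memory cache. It is a sketch of the semantics only, not Redwood's Memcached/Redis adapters; `now` is a parameter so the one-hour window can be exercised without waiting.

```javascript
// Toy in-memory cache, for illustration only (not Redwood's adapters).
const store = new Map()

function cache(key, fn, { expires } = {}, now = Date.now()) {
  const hit = store.get(key)
  if (hit && (hit.expiresAt === undefined || now < hit.expiresAt)) {
    return hit.value // still fresh: skip the expensive work
  }
  const value = fn() // miss or expired: rebuild the entry
  store.set(key, {
    value,
    expiresAt: expires === undefined ? undefined : now + expires * 1000,
  })
  return value
}

let builds = 0
let clock = 0
const product = () => cache('product-1', () => ++builds, { expires: 3600 }, clock)

product() // first call: builds the entry
clock = 1800 * 1000 // 30 minutes later
product() // still within the hour: served from cache
clock = 3601 * 1000 // just past the hour
product() // expired: rebuilt, whether the record changed or not
console.log(builds) // 2
```

Note the third call rebuilds unconditionally: time-based expiry trades some unnecessary rebuilds for not having to reason about which fields changed.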
@@ -991,7 +981,7 @@ The second usage of the logger argument:
```js
export const { cache, cacheFindMany } = createCache(client, {
logger,
-  timeout: 500,
+  timeout: 500
})
```
@@ -1012,21 +1002,17 @@ Use this function when you want to cache some data, optionally including a numbe
```js
// cache forever
-const posts = () => {
+const post = ({ id }) => {
  return cache(`posts`, () => {
-    return db.post.findMany()
+    db.post.findMany()
})
}

// cache for 1 hour
-const posts = () => {
-  return cache(
-    `posts`,
-    () => {
-      return db.post.findMany()
-    },
-    { expires: 3600 }
-  )
+const post = ({ id }) => {
+  return cache(`posts`, () => {
+    db.post.findMany()
+  }, { expires: 3600 })
}
```
@@ -1035,15 +1021,15 @@ Note that a key can be a string or an array:
```js
const post = ({ id }) => {
return cache(`posts-${id}-${updatedAt.getTime()}`, () => {
-    return db.post.findUnique({ where: { id } })
+    db.post.findMany()
})
}

// or

const post = ({ id }) => {
-  return cache(['posts', id, updatedAt.getTime()], () => {
-    return db.post.findUnique({ where: { id } })
+  return cache(['posts', id, updatedAt.getTime()], () => {
+    db.post.findMany()
})
}
```
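How an array key actually becomes a cache key is an adapter detail the docs leave open. A plausible (hypothetical) normalization simply joins the parts, which is why the string and array forms above are interchangeable:

```javascript
// Hypothetical sketch: one way an array key could be flattened to a string.
// The exact serialization is up to the cache client adapter.
const formatKey = (key) => (Array.isArray(key) ? key.join('-') : key)

console.log(formatKey(['posts', 123, 1668342800000])) // 'posts-123-1668342800000'
console.log(formatKey('posts-123')) // 'posts-123'
```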
@@ -1071,7 +1057,7 @@ The above is the simplest usage example. If you need to pass a `where`, or any o
```js
const post = ({ id }) => {
return cacheFindMany(`users`, db.user, {
-    conditions: { where: { roles: 'admin' } },
+    conditions: { where: { roles: 'admin' } }
})
}
```
@@ -1082,22 +1068,24 @@ This is functionally equivalent to the following:
const latest = await db.user.findFirst({
  where: { roles: 'admin' },
orderBy: { updatedAt: 'desc' },
  select: { id: true, updatedAt: true }
})

return cache(`posts-${latest.id}-${latest.updatedAt.getTime()}`, () => {
-  return db.post.findMany({ where: { roles: 'admin' } })
+  db.post.findMany({ where: { roles: 'admin' } })
})
```
If you also want to pass an `expires` option, do it in the same object as `conditions`:
```js
const post = ({ id }) => {
-  return cacheFindMany(`users`, db.user, {
-    conditions: { where: { roles: 'admin' } },
-    expires: 86400,
-  })
+  return cacheFindMany(
+    `users`, db.user, {
+      conditions: { where: { roles: 'admin' } },
+      expires: 86400
+    }
+  )
}
```
@@ -1126,16 +1114,16 @@ const updateUser = async ({ id, input }) => {
})
```
-:::warning
+:::caution
When explicitly deleting cache keys like this you could find yourself going down a rabbit hole. What if there is another service somewhere that also updates user? Or another service that updates an organization, as well as all of its underlying child users at the same time? You'll need to be sure to call `deleteCacheKey()` in these places as well. As a general guideline, it's better to come up with a cache key that encapsulates any triggers for when the data has changed (like the `updatedAt` timestamp, which will change no matter who updates the user, anywhere in your codebase).
Scenarios like this are what people are talking about when they say that caching is hard!
:::
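The alternative the warning recommends, timestamps in keys rather than explicit deletes, can be sketched in a few lines. This is a toy illustration, not Redwood internals:

```javascript
// Toy illustration: when the key embeds updatedAt, an update moves reads to a
// fresh key, so the stale entry is simply never consulted again. There are no
// deleteCacheKey() calls to keep in sync across services.
const entries = new Map()

const cacheOnce = (key, fn) => {
  if (!entries.has(key)) entries.set(key, fn())
  return entries.get(key)
}

const keyFor = (user) => `user-${user.id}-${user.updatedAt}`

let user = { id: 1, name: 'Rob', updatedAt: 100 }
cacheOnce(keyFor(user), () => user.name) // cached under 'user-1-100'

// Any code path that updates the user also bumps updatedAt:
user = { id: 1, name: 'Robert', updatedAt: 200 }
const name = cacheOnce(keyFor(user), () => user.name)
console.log(name) // 'Robert'
```

The old `user-1-100` entry still exists until it expires or is evicted, but nothing ever reads it, which is the point: correctness no longer depends on remembering to delete it.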
### Testing what you cache
We wouldn't just give you all of these caching APIs and not show you how to test it right? You'll find all the details in the [Caching section in the testing doc](testing.md#testing-caching).
### Creating Your Own Client