
CacheProvider for future scaling needs #168

Closed
jamiechapman opened this issue Feb 2, 2016 · 7 comments

Comments

@jamiechapman

I noticed that whenever a user authenticates with Parse.User.logIn, they're added to the users array in cache.js, and they're only ever removed when a user forcefully signs out (which in our case is quite rare) or when the node process restarts/crashes.

This might be fine for light traffic loads or for development use — but I can see this potentially becoming a problem for apps with large amounts of traffic as it is likely to gobble up unnecessary RAM. It may also create inconsistency between load balanced nodes/dynos.

I'm proposing at some point in the future (probably not as a priority) we make the switch to Redis or memcached to hold these variables somewhere consistent (especially if load balancing traffic etc). This might also pave the way for some more advanced query caching in the future too to minimise hits to the popular database for queries that don't often change. Heroku has quite a few hosted Redis/Memcached options so it should be fairly trivial for most users to optionally switch this behaviour on if they want.

Happy to contribute but thought I'd open up an issue first so it can be discussed and tagged up as up for grabs.

@gnz00

gnz00 commented Feb 2, 2016

Cache needs to be abstracted, with a default memory implementation and providers for memcached or redis.
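To make the suggestion concrete, here's a minimal sketch of what the adapter split could look like. The names (`MemoryCacheAdapter`, the `get`/`put`/`del`/`clear` methods, the TTL default) are illustrative assumptions, not the actual parse-server API:

```javascript
// Hypothetical in-memory default adapter; a redis/memcached provider
// would expose the same four methods backed by an external store.
class MemoryCacheAdapter {
  constructor(ttl = 5000) {
    this.ttl = ttl;       // default time-to-live in ms
    this.map = new Map(); // key -> { value, expires }
  }
  get(key) {
    const entry = this.map.get(key);
    if (!entry) return null;
    if (entry.expires && entry.expires < Date.now()) {
      this.map.delete(key); // lazily evict expired entries
      return null;
    }
    return entry.value;
  }
  put(key, value, ttl = this.ttl) {
    const expires = ttl ? Date.now() + ttl : 0; // 0 = never expires
    this.map.set(key, { value, expires });
  }
  del(key) {
    this.map.delete(key);
  }
  clear() {
    this.map.clear();
  }
}
```

The point of the shared interface is that the rest of the server never knows which backing store it's talking to.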

@skinp
Contributor

skinp commented Feb 2, 2016

I agree, this is IMO one of the single most needed features for heavy production usage.
Let's keep the default implementation as the in-memory Node cache, but we should extract this into adapters and have official support for at least one of memcached/redis. PRs welcome

@lucianmat

👍

@gnz00

gnz00 commented Feb 3, 2016

Abstracted out the cache to a CacheProvider class, might be worth looking into: https://github.com/maysale01/parse-server/tree/es6-all-tests-passing

@lucianmat

Providers (including cache) should return a promise to resolve, allowing them to fetch/refresh data from external storage. Otherwise, data in the local cache may not be synced easily.

It would be consistent to use Promise and not callbacks with the rest of the code.

Something like

`cacheProvider.getApp(appId).then( [....])`

This way the cacheProvider may store data externally and, based on implementation logic, choose to refresh or return a local copy...
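The local-copy-with-external-fallback behavior described above could be sketched roughly like this. `CacheProvider`, the `externalStore` shape, and `getApp` here are all assumptions for illustration, not an agreed design:

```javascript
// Hypothetical promise-based provider: serve a local copy when present,
// otherwise fetch from an external store and refresh the local copy.
class CacheProvider {
  constructor(externalStore) {
    this.local = new Map();
    this.external = externalStore; // assumed to expose get(key) -> Promise
  }
  getApp(appId) {
    if (this.local.has(appId)) {
      // Wrap in a resolved promise so callers always get the same shape
      return Promise.resolve(this.local.get(appId));
    }
    return this.external.get(appId).then(app => {
      this.local.set(appId, app); // refresh the local copy
      return app;
    });
  }
}
```

Because both paths return a promise, calling code stays uniform regardless of whether the data came from memory or the external store.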

@gnz00

gnz00 commented Feb 3, 2016

+1. The whole interface needs to be redesigned. I was just trying to get the tests passing with ES6 and some basic abstraction. I'll whip something up tomorrow.

@jamiechapman
Author

CacheProvider definitely sounds like a step in the right direction for sure.

For one of our production apps we get a crazy level of traffic, so we built a custom caching layer that sits on top of Hosted Parse (we call it C3, short for Cloud Code Cache). It's essentially an Objective-C PFCloud category that calls our own server first to check Memcached for stored results of the function call with the same arguments (because for us the results are good enough to be stale for a good hour or so). If we don't have any results in the cache, we call Hosted Parse and then cache the results for an hour to serve to the next request. On the mobile side we gracefully fall back to a direct PFCloud call if our servers are down/overloaded. The client side also caches the response locally and doesn't hit the caching servers for the same function/parameter hash until the cache time has expired.

The above is definitely way beyond what we need out of a cache for v1, but I thought I'd share the sort of things we get up to, as it's quite exciting to be able to bake this advanced caching stuff right into parse-server itself. Perhaps access to CacheProvider via a module within our custom Cloud Code would be a smart move?
