Ensuring read_multi works with fragment cache. #1814
Conversation
By analyzing the blame information on this pull request, we identified @bf4, @kevintyll and @beauby to be potential reviewers.
```diff
@@ -197,7 +197,7 @@ def object_cache_keys(collection_serializer, adapter_instance, include_directive
       def object_cache_key(serializer, adapter_instance)
         return unless serializer.present? && serializer.object.present?

-        serializer.class.cache_enabled? ? serializer.cache_key(adapter_instance) : nil
+        (serializer.class.cache_enabled? || serializer.class.fragment_cache_enabled?) ? serializer.cache_key(adapter_instance) : nil
```
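The change above can be read as: a serializer now contributes a cache key when *either* full caching or fragment caching is enabled, instead of full caching only. A minimal plain-Ruby sketch of that guard (`SerializerDouble` is a hypothetical stand-in, not the real AMS class; the real predicates live on the serializer class):

```ruby
# Hypothetical stand-in for a serializer; real AMS serializer classes expose
# cache_enabled? / fragment_cache_enabled? with these semantics.
SerializerDouble = Struct.new(:cached, :fragment_cached, :key) do
  def cache_enabled?
    cached
  end

  def fragment_cache_enabled?
    fragment_cached
  end

  def cache_key
    key
  end
end

# Mirrors the patched guard: fragment-cached serializers now yield a key too,
# so their objects participate in the batched read_multi.
def object_cache_key(serializer)
  return nil unless serializer.cache_enabled? || serializer.fragment_cache_enabled?

  serializer.cache_key
end
```

With the old guard, the fragment-cached case would have returned `nil` and been silently excluded from the batched read.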
Oops
You did push a lot of code that week, though.
Did I introduce it? It seemed always broken to me.
Hahaha, when you said Oops I figured you had. Looks like I got it wrong.
```ruby
non_cached_hash.merge! resource_relationships({}, { include_directive: include_directive }, adapter_instance)
non_cached_hash
end
```
This looks like it was removed from `fetch_attributes_fragment` unchanged? Was it a rubocop thing? The method needs a refactor, I agree... so sell me on why it's better with the `non_cached_hash` code moved to its own method (but not the `cached_hash`)?
Also, when adding methods, since this is in such flux, please prepend them with a `# @api private` comment and place them after where they are used.
Rubocop thing. I would rather not have changed it, but it kept complaining about code complexity (the nested fetches).
Oh, so just put `# rubocop:disable NameOfWarning` at the top of the method and `# rubocop:enable NameOfWarning` at the bottom, as this is intentional.
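For illustration, the suppression scoped to a single method would look like this. `Metrics/AbcSize` is an assumed cop name here; substitute whichever complexity cop is actually firing:

```ruby
# rubocop:disable Metrics/AbcSize
def fetch_attributes_fragment(cached_hash, non_cached_hash)
  # the nested fetches that trip the complexity check would live here;
  # merging the two hashes stands in for the real body
  cached_hash.merge(non_cached_hash)
end
# rubocop:enable Metrics/AbcSize
```

Only the code between the paired comments is exempted, so the cop still applies to the rest of the file.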
@zaaroth you mentioned to me in chat something about ETags, as well?
@bf4 Yeah, in my environment I noticed the serializers were running twice: once for the request and again for the ETag calculation. This happened when I did `render :json => object, serializer: Serializer`. It does not happen when you pass something like `render :json => some_string_in_json_format`.
Benchmark results:

Before:

```json
{
  "commit_hash": "32a3b53",
  "version": "0.10.1",
  "rails_version": "4.2.6",
  "benchmark_run[environment]": "2.2.4p230",
  "runs": [
    {
      "benchmark_type[category]": "caching on: caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 1006.698,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1268
    },
    {
      "benchmark_type[category]": "caching on: fragment caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 895.581,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1365
    },
    {
      "benchmark_type[category]": "caching on: non-caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 964.903,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1232
    },
    {
      "benchmark_type[category]": "caching off: caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 885.642,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1268
    },
    {
      "benchmark_type[category]": "caching off: fragment caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 753.483,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1365
    },
    {
      "benchmark_type[category]": "caching off: non-caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 777.386,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1232
    }
  ]
}
```

After:

```json
{
  "commit_hash": "bcf3358",
  "version": "0.10.1",
  "rails_version": "4.2.6",
  "benchmark_run[environment]": "2.2.4p230",
  "runs": [
    {
      "benchmark_type[category]": "caching on: caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 999.196,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1268
    },
    {
      "benchmark_type[category]": "caching on: fragment caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 906.462,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1365
    },
    {
      "benchmark_type[category]": "caching on: non-caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 953.049,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1232
    },
    {
      "benchmark_type[category]": "caching off: caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 871.369,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1268
    },
    {
      "benchmark_type[category]": "caching off: fragment caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 746.185,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1365
    },
    {
      "benchmark_type[category]": "caching off: non-caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 777.62,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1232
    }
  ]
}
```
@bf4 are these benchmarks taking into account cache latency? In high-latency networks, where a single fetch might take more than 2ms, the roundtrips to fetch and set (or several fetches) start to add up.
Nope... I have another branch for considering that.
@bf4 ah! In this situation it is "safe" to expect close results, then. I was a bit worried at first glance.
@zaaroth released 0.10.2 https://twitter.com/hazula/status/750394854115450880 |
Purpose
AMS was not reading multiple values from the cache at once when the serializer was using fragment cache.
Changes
Changed the flow slightly to find the keys for `fragment_cache_enabled?` serializers, then ensured the cached attributes were forwarded to the appropriate method.
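The batching this enables can be sketched in plain Ruby. A `Hash` stands in for the cache store, and the method and key names are illustrative rather than the actual AMS internals:

```ruby
# Collect every enabled serializer's key first, then do a single batched read
# (Rails.cache.read_multi, backed by e.g. Redis MGET) instead of one fetch
# per object in the collection.
def object_cache_keys(serializers)
  serializers.map { |s| s[:cache_key] }.compact
end

def fetch_cached_attributes(cache_store, serializers)
  keys = object_cache_keys(serializers)
  cache_store.slice(*keys) # stands in for Rails.cache.read_multi(*keys)
end
```

Before this change, fragment-cached serializers produced no key at the first step, so their objects fell back to one cache round trip each.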
Caveats
Hopefully the reviewers will be able to enlighten me.
Related GitHub issues
#827
Additional helpful information
All tests pass, and I've added one that fails on 0.10.1 (not reading multiple values on a fragment-cache-enabled serializer) and now passes. I also tried this code in my dev environment and confirmed AMS was using `mget` for the collection being serialized and all its `has_one` relationships.
AMS should consider using `mset` as well.
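Batched writes could mirror the batched reads. This is a hedged sketch only: `Rails.cache.write_multi` did not exist until Rails 5.2, so on the Rails 4.2 used in the benchmarks this would mean calling the underlying store's native `MSET`. A `Hash` again stands in for the cache store, and the method names are illustrative:

```ruby
# Accumulate cache misses while serializing, then write them back in one
# batch instead of issuing one cache.write per object.
def write_back_misses(cache_store, misses)
  cache_store.update(misses) # stands in for write_multi / Redis MSET
  misses.keys
end
```

On a high-latency cache, this turns N write round trips per collection into one, the same win `mget` gives on the read side.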