
Ensuring read_multi works with fragment cache. #1814

Merged: 5 commits into rails-api:master on Jun 23, 2016

Conversation

@zaaroth (Contributor) commented Jun 19, 2016

Purpose

AMS was not reading multiple values from the cache at once when the serializer was using fragment caching.

Changes

Changed the flow slightly to also collect cache keys for fragment_cache_enabled? serializers, then ensured the cached attributes were forwarded to the appropriate method.

Caveats

Hopefully the reviewers will be able to enlighten me.

Related GitHub issues

#827

Additional helpful information

All tests pass, and I've added one that fails on 0.10.1 (not reading multiple values with a fragment-cache-enabled serializer) and now passes. I also tried this code in my dev environment and confirmed AMS was using mget for the collection being serialized and all of its has_one relationships.

AMS should consider using mset as well.

@mention-bot

By analyzing the blame information on this pull request, we identified @bf4, @kevintyll and @beauby to be potential reviewers.

```diff
@@ -197,7 +197,7 @@ def object_cache_keys(collection_serializer, adapter_instance, include_directive
   def object_cache_key(serializer, adapter_instance)
     return unless serializer.present? && serializer.object.present?

-    serializer.class.cache_enabled? ? serializer.cache_key(adapter_instance) : nil
+    (serializer.class.cache_enabled? || serializer.class.fragment_cache_enabled?) ? serializer.cache_key(adapter_instance) : nil
```
Member:

Oops

Contributor (Author):

You did push a lot of code that week, though.

Member:

Did I introduce it? It always seemed broken to me.

Contributor (Author):

Hahaha, when you said "Oops" I figured you had. Looks like I got it wrong.

```ruby
  non_cached_hash.merge! resource_relationships({}, { include_directive: include_directive }, adapter_instance)
  non_cached_hash
end
```
Member:

This looks like it was removed from fetch_attributes_fragment unchanged? Was it a rubocop thing? The method needs a refactor, I agree... so sell me on why it's better with the non_cached_hash code moved into its own method (but not the cached_hash)?

Also, when adding methods, since this code is in such flux, please prepend them with a # @api private comment and place them after where they are used.

Contributor (Author):

Rubocop thing. I would rather not have changed it, but rubocop kept complaining about code complexity (the nested fetches).

@bf4 (Member) commented Jun 20, 2016

Oh, so just put # rubocop:disable NameOfWarning at the top of the method and # rubocop:enable NameOfWarning at the bottom, as this is intentional.

@bf4 (Member) commented Jun 20, 2016

@zaaroth you mentioned to me in chat something about etags, as well?

@zaaroth (Contributor, Author) commented Jun 20, 2016

@bf4 Yeah, in my environment I noticed the serializers were running twice: once for the request and again for the ETag calculation.

This happened when I did render :json => object, serializer: Serializer. It does not happen when you pass something like render :json => some_string_in_json_format.

@bf4 merged commit bcf3358 into rails-api:master on Jun 23, 2016

@bf4 (Member) commented Jun 23, 2016

Benchmark results:

Before:

```json
{
  "commit_hash": "32a3b53",
  "version": "0.10.1",
  "rails_version": "4.2.6",
  "benchmark_run[environment]": "2.2.4p230",
  "runs": [
    {
      "benchmark_type[category]": "caching on: caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 1006.698,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1268
    },
    {
      "benchmark_type[category]": "caching on: fragment caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 895.581,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1365
    },
    {
      "benchmark_type[category]": "caching on: non-caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 964.903,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1232
    },
    {
      "benchmark_type[category]": "caching off: caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 885.642,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1268
    },
    {
      "benchmark_type[category]": "caching off: fragment caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 753.483,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1365
    },
    {
      "benchmark_type[category]": "caching off: non-caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 777.386,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1232
    }
  ]
}
```

After:

```json
{
  "commit_hash": "bcf3358",
  "version": "0.10.1",
  "rails_version": "4.2.6",
  "benchmark_run[environment]": "2.2.4p230",
  "runs": [
    {
      "benchmark_type[category]": "caching on: caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 999.196,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1268
    },
    {
      "benchmark_type[category]": "caching on: fragment caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 906.462,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1365
    },
    {
      "benchmark_type[category]": "caching on: non-caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 953.049,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1232
    },
    {
      "benchmark_type[category]": "caching off: caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 871.369,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1268
    },
    {
      "benchmark_type[category]": "caching off: fragment caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 746.185,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1365
    },
    {
      "benchmark_type[category]": "caching off: non-caching serializers: gc off",
      "benchmark_run[result][iterations_per_second]": 777.62,
      "benchmark_run[result][total_allocated_objects_per_iteration]": 1232
    }
  ]
}
```

@zaaroth (Contributor, Author) commented Jun 23, 2016

@bf4 Are these benchmarks taking cache latency into account? On high-latency networks, where a single fetch might take more than 2ms, the round trips to fetch and set, or several sequential fetches, start to add up.

@bf4 (Member) commented Jun 23, 2016

Nope... I have another branch for considering that.

@zaaroth (Contributor, Author) commented Jun 23, 2016

@bf4 Ah! In that case it is "safe" to expect close results, then. I was a bit worried at first glance.

@bf4 (Member) commented Jul 5, 2016
