
I want to submit a patch to improve performance when the backend is down, but may not be permitted to. #1241

Open
lsy1990 opened this issue Nov 20, 2020 · 11 comments

Comments


lsy1990 commented Nov 20, 2020

[provide a description of the issue]

Version

[provide output of the nginx -V or openresty -V command from openshift/local terminal]
[provide timestamp of the docker image from docker inspect --format='{{.Created}}' quay.io/3scale/apicast:master ]

Steps To Reproduce
  1. [step 1 (json configuration file, if applies)]
  2. [step 2 (curl commands to reproduce, if applies)]
  3. [step 3]
Current Result
Expected Result
Additional Information
  • [Gist with minimal reproducible configuration, see guidelines for contributing for details]
  • [Gist with nginx log output]

ERROR: Permission to 3scale/APIcast.git denied to lsy1990.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights


lsy1990 commented Nov 20, 2020

This is to improve performance when the backend is down.
Problem:
when the backend is down and the batcher's cache has expired, every
request is sent to the backend and waits until it times out before
falling back to the caching policy's data. This makes the business TPS
drop.
Solution:
use the caching policy's data to update the batcher's cache, so that
not every request has to check with the backend.
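
A toy, self-contained sketch of this idea (plain Lua; the cache objects and the transaction key below are illustrative stand-ins, not APIcast's real batcher or caching-policy objects):

-- Stand-ins: in APIcast these are shared-dict backed caches, here plain tables.
local caching_policy_data = { ["service:1/user_key:abc"] = 200 }  -- last status stored by the caching policy
local auths_cache = {}                                            -- the batcher's local auth cache

-- When the backend is down, reuse the caching policy's last known 200 to
-- refill the batcher's cache, so follow-up requests inside the batcher TTL
-- are authorized locally instead of each waiting for a backend timeout.
local function handle_backend_down(transaction_key)
  local cached = caching_policy_data[transaction_key]
  if cached == 200 then
    auths_cache[transaction_key] = 200   -- the proposed extra write
    return true
  end
  return false  -- rejection reason unknown, so only a generic error is possible
end

print(handle_backend_down("service:1/user_key:abc"))  --> true
print(auths_cache["service:1/user_key:abc"])          --> 200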


lsy1990 commented Nov 20, 2020

[screenshot attached]

eloycoto (Contributor) commented:

Hi @lsy1990

Thanks for your issue. What you're reporting makes sense, but I need to verify a few things before submitting it.

Let me have a look, and I'll come back to you in the next couple of days.

Regards.


lsy1990 commented Feb 2, 2021

[screenshot attached]


lsy1990 commented Feb 3, 2021

Hi @eloycoto, is there any update on this optimization?


eloycoto commented Feb 3, 2021

Hi,

sorry for the delay; the change makes sense to me. If you want to make a PR, I'm more than happy to merge it.


lsy1990 commented Feb 3, 2021

Got it. I will submit a patch for the change.


lsy1990 commented Feb 4, 2021

[screenshot attached]


lsy1990 commented Feb 4, 2021

From 78dcc57f232808d7e26a23d38b80088fee35c9b2 Mon Sep 17 00:00:00 2001
From: lishenyang <lishenyang@hisense.com>
Date: Thu, 4 Feb 2021 09:01:12 +0800
Subject: [PATCH] update batcher's auth cache based on auth caching data when
 batcher ttl > 0 and backend server is down; the tps is reduced only a little,
 better than the old policy.

---
 gateway/src/apicast/policy/3scale_batcher/3scale_batcher.lua | 1 +
 1 file changed, 1 insertion(+)

diff --git a/gateway/src/apicast/policy/3scale_batcher/3scale_batcher.lua b/gateway/src/apicast/policy/3scale_batcher/3scale_batcher.lua
index 917d7d4f..ab85fbe6 100644
--- a/gateway/src/apicast/policy/3scale_batcher/3scale_batcher.lua
+++ b/gateway/src/apicast/policy/3scale_batcher/3scale_batcher.lua
@@ -152,6 +152,7 @@ local function handle_backend_error(self, service, transaction, cache_handler)
 
   if cached == 200 then
     self.reports_batcher:add(transaction)
+    self.auths_cache:set(transaction, 200)
   else
     -- The caching policy does not store the rejection reason, so we can only
     -- return a generic error.
--
2.26.1.windows.1
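
For context, a toy illustration of the effect of the one-line addition above (plain Lua with table stand-ins, not APIcast's real access-phase code): once a 200 is present in the batcher's auth cache, subsequent requests within the TTL are answered locally and never wait on the dead backend.

-- auths_cache stands in for the batcher's auth cache (a shared dict in APIcast).
local auths_cache = { ["service:1/user_key:abc"] = 200 }  -- written by the patched handle_backend_error

local function backend_call(key)
  error("backend is down: " .. key)  -- in the real gateway this shows up as a timeout
end

local function authorize(key)
  if auths_cache[key] == 200 then
    return true                      -- served from the local cache, no round-trip
  end
  return pcall(backend_call, key)    -- only uncached transactions hit the backend
end

print(authorize("service:1/user_key:abc"))  --> true, without waiting for a timeout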


thomasmaas commented Feb 5, 2021


eloycoto commented Feb 9, 2021

Hi @lsy1990

Don't worry, I'll try to do the PR and the integration test later this week; I'll ping you as a reviewer.
