
[apisix] Implement router interface and observer interface #1281

Merged

7 commits merged into fluxcd:main from the apisix branch on Dec 7, 2022

Conversation

Gallardot
Contributor

@Gallardot Gallardot commented Oct 13, 2022

Closes #1074

How to test

Requires a Kubernetes cluster v1.19 or newer.
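For local testing, a kind cluster is one option. A minimal sketch, assuming kind is installed (the node image tag is an assumption; any published v1.19 image works):

# create a local cluster running Kubernetes v1.19
kind create cluster --image kindest/node:v1.19.16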

Install APISIX

kubectl create ns apisix

helm repo add apisix https://charts.apiseven.com
helm repo update

helm upgrade -i apisix apisix/apisix \
--namespace apisix \
--set serviceMonitor.enabled=true \
--set apisix.podAnnotations."prometheus\.io/scrape"=true \
--set apisix.podAnnotations."prometheus\.io/port"=9091 \
--set apisix.podAnnotations."prometheus\.io/path"=/apisix/prometheus/metrics \
--set dashboard.enabled=true \
--set ingress-controller.enabled=true \
--set ingress-controller.config.apisix.serviceNamespace=apisix

Install Flagger
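If the flagger Helm repository has not been added yet, add it first (the repository URL is the one documented by Flagger):

helm repo add flagger https://flagger.app
helm repo update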

helm upgrade -i flagger flagger/flagger \
--namespace apisix \
--set prometheus.install=true \
--set meshProvider=apisix

Update the Docker image

Build the Flagger image and load it into the Kubernetes cluster:

docker build . 
......
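A minimal sketch of one way to do this, assuming a local kind cluster and a hypothetical test/flagger:latest tag; the chart's image.repository and image.tag values point the existing release at the local image:

# build a local Flagger image from the repository root (the tag is an assumption)
docker build -t test/flagger:latest .

# load the image into the kind cluster so no registry push is needed
kind load docker-image test/flagger:latest

# point the Flagger release at the local image
helm upgrade -i flagger flagger/flagger \
--namespace apisix \
--reuse-values \
--set image.repository=test/flagger \
--set image.tag=latest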

Update the Flagger CRDs and ClusterRole

Apply the artifacts/flagger/crd.yaml file to the cluster to update the CRDs.
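For example, assuming the command is run from the root of the Flagger repository:

kubectl apply -f artifacts/flagger/crd.yaml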

Then edit Flagger's ClusterRole and add the apisix.apache.org API group:

  - apiGroups:
      - apisix.apache.org
    resources:
      - apisixroutes
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
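One way to add these rules interactively, assuming the ClusterRole installed by the Helm chart is named flagger:

kubectl edit clusterrole flagger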

Bootstrap

kubectl create ns test
kubectl apply -k https://github.com/fluxcd/flagger//kustomize/podinfo?ref=main
helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test

Create an ApisixRoute

apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: podinfo
  namespace: test
spec:
  http:
    - backends:
        - serviceName: podinfo
          servicePort: 80
      match:
        hosts:
          - foobar.com
        methods:
          - GET
        paths:
          - /*
      name: method
      plugins:
        - name: prometheus
          enable: true
          config:
            disable: false
            prefer_name: true

Create a Canary custom resource

apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  provider: apisix
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  # apisix route reference
  routeRef:
    apiVersion: apisix.apache.org/v2
    kind: ApisixRoute
    name: podinfo
  # HPA reference (optional)
  autoscalerRef:
    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    name: podinfo
  # the maximum time in seconds for the canary deployment
  # to make progress before it is rolled back (default 600s)
  progressDeadlineSeconds: 60
  service:
    # ClusterIP port number
    port: 80
    # container port number or name
    targetPort: 9898
  analysis:
    # schedule interval (default 60s)
    interval: 10s
    # max number of failed metric checks before rollback
    threshold: 10
    # max traffic percentage routed to canary
    # percentage (0-100)
    maxWeight: 50
    # canary increment step
    # percentage (0-100)
    stepWeight: 5
    # APISIX Prometheus checks
    metrics:
    - name: request-success-rate
      # minimum req success rate (non 5xx responses)
      # percentage (0-100)
      thresholdRange:
        min: 99
      interval: 1m
    - name: request-duration
      # builtin Prometheus check
      # maximum req duration P99
      # milliseconds
      thresholdRange:
        max: 500
      interval: 30s
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.test/
        timeout: 5s
        type: rollout
        metadata:
          cmd: |-
              hey -z 1m -q 10 -c 2 -h2 -host foobar.com http://apisix-gateway.apisix/api/info
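
Apply it in the same way (the file name podinfo-canary.yaml is again hypothetical):

kubectl apply -f podinfo-canary.yaml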

After a couple of seconds Flagger will create the canary objects:

# applied 
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
apisixroute/podinfo
canary.flagger.app/podinfo

# generated 
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
apisixroute/podinfo-podinfo-canary

Automated canary promotion

Update the podinfo image:

kubectl -n test set image deployment/podinfo \
podinfod=ghcr.io/stefanprodan/podinfo:6.0.1

Watch the canary events:

kubectl -n test describe canary/podinfo

Status:
  Canary Weight:  0
  Conditions:
    Last Transition Time:  2022-11-08T14:41:44Z
    Last Update Time:      2022-11-08T14:41:44Z
    Message:               Canary analysis completed successfully, promotion finished.
    Reason:                Succeeded
    Status:                True
    Type:                  Promoted
  Failed Checks:           1
  Iterations:              0
  Last Applied Spec:       69ff7bc9b4
  Last Promoted Spec:      69ff7bc9b4
  Last Transition Time:    2022-11-08T14:41:44Z
  Phase:                   Succeeded
  Tracked Configs:
Events:
  Type     Reason  Age                    From     Message
  ----     ------  ----                   ----     -------
  Warning  Synced  4m54s                  flagger  podinfo-primary.test not ready: waiting for rollout to finish: observed deployment generation less than desired generation
  Warning  Synced  4m45s                  flagger  podinfo-primary.test not ready: waiting for rollout to finish: 0 of 2 (readyThreshold 100%) updated replicas are available
  Normal   Synced  4m35s (x3 over 4m54s)  flagger  all the metrics providers are available!
  Normal   Synced  4m34s                  flagger  Initialization done! podinfo.test
  Normal   Synced  3m45s                  flagger  New revision detected! Scaling up podinfo.test
  Warning  Synced  3m35s                  flagger  canary deployment podinfo.test not ready: waiting for rollout to finish: 0 of 2 (readyThreshold 100%) updated replicas are available
  Warning  Synced  3m25s                  flagger  canary deployment podinfo.test not ready: waiting for rollout to finish: 1 of 2 (readyThreshold 100%) updated replicas are available
  Normal   Synced  3m15s                  flagger  Starting canary analysis for podinfo.test
  Normal   Synced  3m15s                  flagger  Advance podinfo.test canary weight 5
  Warning  Synced  3m5s                   flagger  Halt advancement no values found for apisix metric request-success-rate probably podinfo.test is not receiving traffic: running query failed: no values found
  Normal   Synced  2m52s                  flagger  Advance podinfo.test canary weight 10
  Normal   Synced  2m45s                  flagger  Advance podinfo.test canary weight 15
  Normal   Synced  2m35s                  flagger  Advance podinfo.test canary weight 20
  Normal   Synced  2m24s                  flagger  Advance podinfo.test canary weight 25
  Warning  Synced  75s                    flagger  podinfo-primary.test not ready: waiting for rollout to finish: 1 old replicas are pending termination
  Warning  Synced  65s                    flagger  podinfo-primary.test not ready: waiting for rollout to finish: 1 of 2 (readyThreshold 100%) updated replicas are available
  Normal   Synced  54s (x7 over 2m15s)    flagger  (combined from similar events): Routing all traffic to primary
watch kubectl get canaries --all-namespaces

NAMESPACE   NAME      STATUS      WEIGHT   LASTTRANSITIONTIME
test        podinfo   Succeeded   0        2022-11-08T14:41:44Z

@Gallardot Gallardot changed the title from "[apisix] Implement router interface and observer interface" to "[WIP][apisix] Implement router interface and observer interface" Oct 13, 2022
@tao12345666333

thanks!

@Gallardot Gallardot marked this pull request as ready for review October 13, 2022 10:23
@Gallardot Gallardot changed the title from "[WIP][apisix] Implement router interface and observer interface" to "[apisix] Implement router interface and observer interface" Oct 13, 2022
@tao12345666333

DCO error

@Gallardot Gallardot force-pushed the apisix branch 2 times, most recently from 06e3b06 to 993108f on November 9, 2022 07:10
@Gallardot Gallardot changed the title from "[apisix] Implement router interface and observer interface" to "[WIP][apisix] Implement router interface and observer interface" Nov 9, 2022
@Gallardot Gallardot force-pushed the apisix branch 4 times, most recently from e91cbe8 to 3f26fdb on November 10, 2022 11:31
@Gallardot
Contributor Author

@tao12345666333 @stefanprodan PTAL.

I think this PR is ready for review now.
I made the following improvements:

  1. Added unit tests
  2. Fixed some bugs
  3. Installing Prometheus via Flagger's chart simplifies testing and deployment
  4. Updated the installation and test procedure documented above

@Gallardot Gallardot changed the title from "[WIP][apisix] Implement router interface and observer interface" to "[apisix] Implement router interface and observer interface" Nov 10, 2022
Review threads (resolved): pkg/apis/apisix/v2/types.go, pkg/router/apisix.go (×3)
@Gallardot
Contributor Author

@aryan9600 I have added e2e tests. PTAL.

Review threads (resolved): pkg/router/apisix.go (×3)
@codecov-commenter

codecov-commenter commented Nov 22, 2022

Codecov Report

Base: 54.32% // Head: 54.24% // Decreases project coverage by -0.07% ⚠️

Coverage data is based on head (7000171) compared to base (ec7066b).
Patch coverage: 50.00% of modified lines in pull request are covered.

❗ Current head 7000171 differs from pull request most recent head 6c29c21. Consider uploading reports for the commit 6c29c21 to get more accurate results

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1281      +/-   ##
==========================================
- Coverage   54.32%   54.24%   -0.08%     
==========================================
  Files          82       84       +2     
  Lines        9834    10016     +182     
==========================================
+ Hits         5342     5433      +91     
- Misses       3853     3927      +74     
- Partials      639      656      +17     
Impacted Files Coverage Δ
pkg/metrics/observers/factory.go 0.00% <0.00%> (ø)
pkg/router/factory.go 0.00% <0.00%> (ø)
pkg/controller/scheduler_metrics.go 36.90% <40.00%> (+0.06%) ⬆️
pkg/router/apisix.go 52.73% <52.73%> (ø)
pkg/metrics/observers/apisix.go 57.14% <57.14%> (ø)


Member

@aryan9600 aryan9600 left a comment


please add/update the following:

  • docs
  • kustomize/base/flagger/rbac.yaml
  • a kustomize/apisix directory containing a kustomization for installing flagger using apisix as the provider

you can use this PR as a reference: #1108

Review threads (resolved): test/apisix/test-canary.sh, pkg/router/apisix.go
@Gallardot
Contributor Author

please add/update the following:

  • docs
  • kustomize/base/flagger/rbac.yaml
  • a kustomize/apisix directory containing a kustomization for installing flagger using apisix as the provider

you can use this PR as a reference: #1108

I've done the following:

  1. Added and updated the documentation
  2. Fixed the kustomize manifests
  3. Fixed the e2e issue
  4. Optimized canary ApisixRoute handling

@aryan9600 @tao12345666333 PTAL.

@Gallardot Gallardot requested review from tao12345666333 and aryan9600 and removed request for tao12345666333 and aryan9600 November 25, 2022 04:08
Review threads (resolved): docs/gitbook/tutorials/apisix-progressive-delivery.md (×7)
@Gallardot Gallardot requested a review from aryan9600 November 25, 2022 07:51
@Gallardot Gallardot force-pushed the apisix branch 2 times, most recently from 69caaf7 to f3721d1 on November 25, 2022 17:10
Member

@aryan9600 aryan9600 left a comment


lgtm! thanks a lot @Gallardot 🙇

@stefanprodan stefanprodan added the kind/feature Feature request label Dec 6, 2022
Member

@stefanprodan stefanprodan left a comment


LGTM

Thanks @Gallardot 🏅

PS. Are there any plans to implement all the other deploy strategies like A/B and mirroring?

@Gallardot
Contributor Author

LGTM

Thanks @Gallardot 🏅

PS. Are there any plans to implement all the other deploy strategies like A/B and mirroring?

The answer is yes. But currently A/B testing via the apisix-ingress-controller is not very friendly for the user. I had a discussion with @tao12345666333 and we will improve the apisix-ingress-controller first.

@tao12345666333

yes! I will do some planning with the community in the next milestone.
Let's provide that functionality in a more user-friendly way.
(Currently it does have that capability, but it's not user friendly.)

@aryan9600 aryan9600 merged commit 2dd48c6 into fluxcd:main Dec 7, 2022
@Gallardot Gallardot deleted the apisix branch December 7, 2022 14:01
Labels: kind/feature (Feature request)

Successfully merging this pull request may close: support Apache APISIX Canary Deployments