
eureka registration time lag with ribbon or spring cloud? #3263

Closed
yugj opened this issue Nov 2, 2018 · 11 comments

@yugj commented Nov 2, 2018

Hi, I want to reduce the Eureka registration state synchronization time. I know the Eureka client, the Eureka server, and Ribbon each keep a local cache of the registration info, so I tried to tune the related configuration, like this:

**Eureka Server**

```yaml
spring:
  application:
    name: eureka
server:
  port: 9000
eureka:
  environment: yugj-test
  client:
    register-with-eureka: false
    fetch-registry: false
    serviceUrl:
      defaultZone: http://localhost:9000/eureka/
  instance:
    prefer-ip-address: true
    instance-id: ${spring.cloud.client.ipAddress}:${server.port}
  server:
    enable-self-preservation: false #default true
    eviction-interval-timer-in-ms: 10000 #default 60000
    response-cache-update-interval-ms: 3000 #default 30000
    use-read-only-response-cache: false
endpoints:
  shutdown:
    enabled: true
    sensitive: false
```

**Eureka Client A (Ribbon and Feign as HTTP client)**

```yaml
server:
  port: 9010

spring:
  application:
    name: sv-ribbon
ribbon:
  MaxConnectionsPerHost: 1000
  MaxTotalConnections: 3000
  ReadTimeout: 10000
  ConnectTimeout: 2000
  MaxAutoRetries: 0
  MaxAutoRetriesNextServer: 0
  ServerListRefreshInterval: 1000
  eureka:
    enabled: true
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:9000/eureka/
    registry-fetch-interval-seconds: 5
    healthcheck:
      enabled: true
  instance:
    lease-renewal-interval-in-seconds: 5
    lease-expiration-duration-in-seconds: 20
    metadata-map:
      cluster: main
    prefer-ip-address: true
    instance-id: ${spring.cloud.client.ipAddress}:${server.port}
```

**Eureka Client B (provides some REST APIs)**

```yaml
server:
  port: 9006

spring:
  profiles:
    active: @profile.id@
  application:
    name: rest-server

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:9000/eureka/
    registry-fetch-interval-seconds: 5
    healthcheck:
      enabled: true
  instance:
    lease-renewal-interval-in-seconds: 5
    lease-expiration-duration-in-seconds: 15
    metadata-map:
      cluster: main
    prefer-ip-address: true
    instance-id: ${spring.cloud.client.ipAddress}:${server.port}
management:
  security:
    enabled: false
log:
  home: /data/logs/${spring.profiles.active}
```

Question:

I turned off the Eureka server read-only cache: `eureka.server.use-read-only-response-cache: false`.

I lowered the Eureka client registry fetch interval: `eureka.client.registry-fetch-interval-seconds: 5`.

I lowered the Ribbon server list refresh time: `ribbon.ServerListRefreshInterval: 1000`.

I think it should take less than 10 seconds for client A to get the latest registration info.

But when I execute: `curl -i -X PUT http://localhost:9000/eureka/apps/rest-server/192.168.1.138:9006/status?value=OUT_OF_SERVICE`

(ps: rest-server is client B)

client A does not always see client B's registration change within 10 seconds; sometimes it takes up to 30 seconds.

(I kept sending requests to client A, which calls client B via Ribbon and Feign; when I set client B to OUT_OF_SERVICE, I counted how long client A kept failing to get a response.)

When I query the Eureka REST API directly, I see the updated registry immediately (see the check below); could it be Ribbon or Spring Cloud doing something wrong?
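
For example, a check along these lines (same app as the PUT above) already shows the new status right after the override is applied:

```sh
# fetch the registry entry for client B straight from the server;
# the status field reads OUT_OF_SERVICE as soon as the PUT goes through
curl -H "Accept: application/json" http://localhost:9000/eureka/apps/rest-server
```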

Or is there some configuration I missed?

Thank you!

@ryanjbaxter (Contributor)

Please learn how to format code on GitHub and read this section of the documentation.

@yugj (Author) commented Nov 2, 2018

> Please learn how to format code on GitHub and read this section of the documentation.

Sorry, paste error.

@yugj (Author) commented Nov 2, 2018

@ryanjbaxter thanks for your reply.
I set `eureka.client.registry-fetch-interval-seconds: 5`;
after 3 heartbeats plus the 5-second local registry cache, it seems it should take less than 20 seconds, is that right?
But sometimes it takes almost 30 seconds before my client A picks up client B's change.
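
To spell out my math (a sketch of how I understand the layers, using the values from the configs above; please correct me if the model is wrong):

```
explicit OUT_OF_SERVICE override (applied on the server immediately):
  server response cache       <= 3s   (response-cache-update-interval-ms: 3000)
  client registry fetch       <= 5s   (registry-fetch-interval-seconds: 5)
  ribbon server list refresh  <= 1s   (ServerListRefreshInterval: 1000)
  worst case                   ~ 9s

instance dies without deregistering:
  lease expiration            <= 15s  (lease-expiration-duration-in-seconds: 15)
  eviction task run           <= 10s  (eviction-interval-timer-in-ms: 10000)
  + the cache chain above     <=  9s
  worst case                   ~ 34s
```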

@ryanjbaxter (Contributor)

What if you lower it even further?

@yugj (Author) commented Nov 2, 2018

> What if you lower it even further?

Because of the time lag, my CD system needs to take the target service offline (set the OUT_OF_SERVICE state) and wait until no other client still has it in its local registry before restarting it, so I want to know the exact Eureka registration state synchronization time; maybe I can lower the wait time.
On the other hand, lowering the interval may mean quicker recovery when an instance dies.
Is there anything wrong with lowering it to 5 seconds (my system has 15 microservices, each with 15 instances)?
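
For context, the offline step looks roughly like this (an illustrative sketch, not our actual script):

```sh
# take the instance out of rotation via the Eureka status override
curl -X PUT "http://localhost:9000/eureka/apps/rest-server/192.168.1.138:9006/status?value=OUT_OF_SERVICE"

# wait until every consumer has dropped the instance; this is the
# window I am trying to pin down
sleep 30

# ... restart the service here ...

# clear the override so the instance can register as UP again
curl -X DELETE "http://localhost:9000/eureka/apps/rest-server/192.168.1.138:9006/status?value=UP"
```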

@ryanjbaxter (Contributor)

I just wanted you to test whether, if you lower it to less than 5 seconds, you then see an overall lower sync time (less than 30 seconds). Your math seems correct, but since these things run in threads that fire periodically, it may not be exactly as you calculate.

@brenuart (Contributor) commented Nov 2, 2018

Beware that changing how frequently the eureka client sends its heartbeats to the server (other than every 30s) is likely to cause very strange results. Please read the following issue before proceeding: #373

@yugj (Author) commented Nov 3, 2018

> Beware that changing how frequently the eureka client sends its heartbeats to the server (other than every 30s) is likely to cause very strange results. Please read the following issue before proceeding: #373

Hi, I don't know what the "very strange results" would be; I read that issue before and it helped.

I lowered it to 5 seconds and published it to my test environment, and I planned to push this lower heartbeat interval to production. Can you point out what the bad effects could be? Maybe I should not lower it.

@brenuart (Contributor) commented Nov 3, 2018

At the time I wrote that post, the Eureka server expected clients to renew their leases every 30s - not more, not less. If you have a single service, the Eureka registry expects to receive 2 heartbeats per minute. With 2 services, it expects 4 heartbeats. Etc.

The registry enters "self-preservation" mode when the actual number of heartbeats received during a given period falls below 80% of the expected number (by default). So, in the above example, if the period is one minute (it is actually longer, but I don't remember how long), that mode is activated when 3 heartbeats are received instead of the 4 expected.

Suppose now that your clients send a heartbeat (renewal) every 10 seconds: the registry will receive 6 heartbeats per minute per client, i.e. 12 heartbeats for 2 clients, whereas it expects only 4... As you can see, the "self-preservation" feature is now broken...

All this because most of the registry's internal logic is hardcoded with a 30s renewal period from the client. As far as I remember, "self preservation" isn't the only feature affected; unfortunately I can't remember now what the others were... :(
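
Roughly, the server-side bookkeeping looked like this at the time (a sketch from memory, not the exact code):

```
expected renews per minute  = registered instances * 2           (30s interval hardcoded)
renews threshold            = expected * renewal-percent-threshold

2 instances -> expects 4 renews/min; falling below the threshold
triggers self-preservation. Clients renewing every 10s instead
-> 12/min arrive, so the expected/actual comparison means nothing.
```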

@holy12345 (Contributor)

Hi @brenuart

First

> The registry enters "self-preservation" mode when the actual number of heartbeats received during a given period falls below 80% of the expected number (by default)

Maybe not 80%. It's 85%.

Second

> Suppose now that your clients send a heartbeat (renewal) every 10 seconds: the registry will receive 6 heartbeats per minute per client, i.e. 12 heartbeats for 2 clients, whereas it expects only 4... As you can see, the "self-preservation" feature is now broken...

This has already been fixed in Eureka 1.9.4. See Netflix/eureka#1093
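
If I read that PR correctly, the server can now be told what renewal interval to expect instead of assuming 30s, so when lowering the client heartbeat you would set something along these lines (property name taken from the 1.9.x EurekaServerConfig; worth double-checking):

```yaml
eureka:
  server:
    # assumption: should match the clients' lease-renewal-interval-in-seconds
    expected-client-renewal-interval-seconds: 5
```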

Thanks

@spencergibb (Member)

We have incorporated that version of Eureka.
