
Add method to restart stream when retry strategy is exhausted #88

Closed · wants to merge 1 commit from the restart-stream branch

Conversation

@Marenz (Contributor) commented Sep 26, 2024

No description provided.

Signed-off-by: Mathias L. Baumann <mathias.baumann@frequenz.com>
@Marenz requested a review from a team as a code owner on September 26, 2024 09:01
@Marenz requested a review from shsms on September 26, 2024 09:01
@github-actions bot added the part:docs, part:tests, and part:code labels on Sep 26, 2024
@Marenz (Contributor, Author) commented Sep 26, 2024

> Do we really need this? Why not just set the retry limit to None (which is the default, BTW)?

I am setting it to None. But how am I ever gonna restart that stream now?

A comment from @Marenz was marked as outdated.

@llucax (Contributor) commented Sep 26, 2024

> I am setting it to None. But how am I ever gonna restart that stream now?

Why do you want to restart it if it restarts automatically, continuously? Maybe you can describe the use case you have in mind in more detail?

> No it's not?!

Yes, indefinitely.

```python
self._retry_strategy = (
    retry.LinearBackoff() if retry_strategy is None else retry_strategy.copy()
)
```

```python
class LinearBackoff(Strategy):
    """Provides methods for calculating the interval between retries."""

    def __init__(
        self,
        interval: float = DEFAULT_RETRY_INTERVAL,
        jitter: float = DEFAULT_RETRY_JITTER,
        limit: int | None = None,
```
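As an aside for readers of this thread, here is a minimal, self-contained sketch of the behavior being discussed: how a linear backoff with an optional limit differs between `limit=None` (retry forever) and a finite limit (the stream eventually stays closed). This is not the library's actual retry loop; the default values and the generator are purely illustrative.

```python
import random


def linear_backoff_intervals(
    interval: float = 3.0, jitter: float = 1.0, limit: int | None = None
):
    """Yield wait times between retries until the optional limit is exhausted.

    With limit=None the generator never stops (retry indefinitely); with a
    finite limit it stops after that many retries, which is the point at
    which the streamer in this discussion becomes unusable without a restart.
    """
    retries = 0
    while limit is None or retries < limit:
        yield interval + random.uniform(0.0, jitter)
        retries += 1


# Example: only three retries, then the stream stays closed for good.
for wait in linear_backoff_intervals(limit=3):
    print(f"retrying in {wait:.1f}s")
```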

@Marenz (Contributor, Author) commented Sep 26, 2024

Well, I want an exception to be thrown when the connection fails.

Then I handle it, etc., and then I want to try again. But I can't: once the retry strategy is exhausted, the whole streamer instance is useless from that point on.
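A rough sketch of the consumption loop being described, assuming a hypothetical streamer object exposing `start()` and `new_receiver()` (the names are illustrative stand-ins, not the confirmed API added by this PR):

```python
import asyncio


async def consume_with_restart(streamer) -> None:
    """Consume a stream and restart it after its retries are exhausted.

    The streamer is assumed to expose start() and new_receiver(); these
    names are illustrative stand-ins for whatever the real API provides.
    """
    while True:
        streamer.start()
        receiver = streamer.new_receiver()
        try:
            async for message in receiver:
                print("got", message)  # application-specific handling goes here
        except Exception as error:
            # The connection failed and the retry strategy is exhausted;
            # handle the error, then loop around and restart the stream.
            print("stream failed, restarting:", error)
            await asyncio.sleep(1.0)
```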

@llucax (Contributor) commented Sep 26, 2024

> Then the documentation is wrong :P

No, why? It says it retries indefinitely.

> Well, I want an exception to be thrown when the connection fails.

So you don't want to retry indefinitely; you want more control. Is that the use case?

@Marenz (Contributor, Author) commented Sep 26, 2024

Yes, that's the use case. Remember, we wanted to move the handling a level higher?

@llucax (Contributor) commented Sep 26, 2024

OK, I guess I was confused by the PR title, because in this case you don't want a retry strategy at all, right?

@Marenz (Contributor, Author) commented Sep 26, 2024

Yes, but I can already have that by just passing `limit=0`.
The title just refers to the condition under which it's needed: when the retry strategy is exhausted and the channel is closed.

@llucax (Contributor) commented Sep 26, 2024

OK, looking at it again with the new information: what would be the advantage of being able to restart it compared to creating a new streamer? You are rebuilding the whole internal state (recreating the channel, tasks, etc.), so from an efficiency point of view I guess it doesn't make much of a difference.

Would it be more inconvenient for your use case to re-create the streamer instead? In any case, this PR is fixing a sort of bug: it is not great that calling start() after the streamer was stopped makes it misbehave.

But I was wondering if we could still keep the channel open (or even the task alive). As for the channel, right now it is closed to stop the receivers, I think, so I wonder whether this is something we want to do or not.

@llucax (Contributor) commented Sep 26, 2024

Oh, I guess in your use case you'll just create a new receiver from the streamer when the current receiver stops? If so, I can see how that is simpler than recreating the whole streamer.

@llucax (Contributor) left a review comment

I would update the PR description and commit message to something like "Allow (re)starting a stopped streamer" to make it less confusing. Other than that, I'm fine with merging this as is and leaving other ideas for future improvements.

@Marenz (Contributor, Author) commented Sep 26, 2024

I mean, I can also just recreate the whole instance, sure: frequenz-floss/frequenz-client-dispatch-python#94
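For comparison with the earlier sketch, the recreate-the-instance alternative referenced here might look roughly like this, assuming a hypothetical `make_streamer()` factory (again illustrative names, not the real API). The point made in this thread is that both approaches end up rebuilding the internal channel and tasks anyway.

```python
async def consume_with_recreation(make_streamer) -> None:
    """Build a fresh streamer for every session instead of restarting one.

    make_streamer is an illustrative factory returning a new streamer; it
    stands in for whatever the application uses to construct one.
    """
    while True:
        streamer = make_streamer()
        streamer.start()
        receiver = streamer.new_receiver()
        try:
            async for message in receiver:
                print("got", message)  # application-specific handling
        finally:
            await streamer.stop()  # assumed async cleanup hook; name illustrative
```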

@llucax (Contributor) commented Sep 26, 2024

I'm fine with both as well; I just wanted to understand the use case better. Both PRs are approved, so I leave it to you to choose which one to use.

@Marenz closed this Sep 26, 2024
@Marenz deleted the restart-stream branch September 26, 2024 15:31
@Marenz restored the restart-stream branch September 30, 2024 08:19