
ActorTaskScheduler refactoring with benchmark #1484

Merged: 1 commit into akkadotnet:dev on Dec 4, 2015

Conversation

@JeffCyr (Contributor) commented Dec 1, 2015

This pull request is a refactoring to fix the issues discussed in issue #1481. The major change from the old design is that each actor now has its own (lazily loaded) ActorTaskScheduler. CallContext is no longer required to flow the state, so it should achieve better performance.

For the full comment history see Pull Request #1482
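A minimal sketch of how a lazily created per-actor scheduler could be wired up. The ActorCellSketch type, the Scheduler property, and CreateSchedulerForThisActor are illustrative names only, not the actual Akka.NET implementation, and TaskScheduler.Default is just a stand-in so the sketch compiles:

using System;
using System.Threading.Tasks;

public class ActorCellSketch
{
    private readonly Lazy<TaskScheduler> _scheduler;

    public ActorCellSketch()
    {
        // The factory only runs on first access, so actors that never use
        // async receives never allocate a scheduler.
        _scheduler = new Lazy<TaskScheduler>(CreateSchedulerForThisActor, isThreadSafe: true);
    }

    public TaskScheduler Scheduler => _scheduler.Value;

    private TaskScheduler CreateSchedulerForThisActor()
    {
        // In the real PR this would be the actor's own ActorTaskScheduler bound to
        // its mailbox; TaskScheduler.Default is only a stand-in here.
        return TaskScheduler.Default;
    }
}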

@Aaronontheweb (Member)

Re-running the build

@JeffCyr force-pushed the ActorTaskScheduler-refactoring branch from 115498d to be0db4a on December 2, 2015 10:34
@JeffCyr (Contributor, Author) commented Dec 2, 2015

I updated this PR according to the discussion about the LongRunning flag in PR #1470.

@Zetanova (Contributor) commented Dec 2, 2015

Looks good.

There is no point in having two different behaviors in the execution of AsyncTask().Wait() and RunTask(AsyncTask). If someone wants to enforce single-threading, there should be an explicit method for it, like ActorTaskScheduler.RunTaskSync(AsyncTask).

The problem with a single-threaded implementation is that there are many deadlock scenarios. For example, RunTask(() => { AsyncTask().Wait(); }); would already deadlock, and sometimes the blocking call is buried inside a third-party library.
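To make the deadlock concrete, a toy sketch; DeadlockSketch and AsyncTask are illustrative names, and the comments state the assumption that the delegate is executed on a single-threaded actor scheduler:

using System.Threading.Tasks;

public static class DeadlockSketch
{
    private static async Task AsyncTask()
    {
        // Without ConfigureAwait(false) the continuation is posted back to the
        // current (actor) scheduler.
        await Task.Delay(100);
    }

    public static void BlockingBody()
    {
        // If this runs on a single-threaded ActorTaskScheduler, Wait() blocks the only
        // thread that could ever run the awaited continuation: a deadlock.
        AsyncTask().Wait();
    }
}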

The LongRunning behavior should be the same for Task.Factory.StartNew() whether or not it runs inside ActorTaskScheduler.RunTask().
Sometimes a third-party library lazily starts an internal message thread with the LongRunning flag; that has nothing to do with the Akka actor. It just wants a non-ThreadPool thread for long-running work.
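For reference, the kind of third-party pattern being described usually looks something like this sketch (MessagePump is a made-up name). TaskCreationOptions.LongRunning asks the TPL for a dedicated thread rather than a ThreadPool worker, and the explicit TaskScheduler.Default keeps the work off any ambient actor scheduler:

using System.Threading;
using System.Threading.Tasks;

public static class MessagePump
{
    // Typical library pattern: a lazily started, dedicated pump thread.
    public static Task Start(CancellationToken token)
    {
        return Task.Factory.StartNew(
            () =>
            {
                while (!token.IsCancellationRequested)
                {
                    // drain an internal queue, sleep, etc.
                    Thread.Sleep(100);
                }
            },
            token,
            TaskCreationOptions.LongRunning,
            TaskScheduler.Default);
    }
}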

@Zetanova (Contributor) commented Dec 2, 2015

Maybe queue all tasks of ONE RunTask in the ActorTaskScheduler with the help of a linked list and execute them in sequence on a worker thread that holds the ActorContext. Then there is the possibility to implement TryDequeue() and GetScheduledTasks().

The number of worker threads for normal execution should be the same as the normal thread-pool TaskScheduler, and for single-threaded execution with RunTaskSync() only ONE worker.

When the first task resumes the mailbox, that would be a good place to generate an error or warning if GetScheduledTasks() is not empty; it would mean there are still free-floating work-pool tasks around.
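A rough, hypothetical skeleton of the queuing scheduler being proposed here, built only on the standard overridable TaskScheduler members (QueueTask, TryDequeue, GetScheduledTasks, TryExecuteTaskInline); it is a sketch of the idea, not the PR's implementation:

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public sealed class QueuingSchedulerSketch : TaskScheduler
{
    private readonly LinkedList<Task> _queue = new LinkedList<Task>();

    protected override void QueueTask(Task task)
    {
        lock (_queue) { _queue.AddLast(task); }
        // A real implementation would signal the worker that owns the actor context here.
    }

    protected override bool TryDequeue(Task task)
    {
        lock (_queue) { return _queue.Remove(task); }
    }

    protected override IEnumerable<Task> GetScheduledTasks()
    {
        lock (_queue) { return _queue.ToArray(); }
    }

    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
    {
        // Keep execution on the dedicated worker; never inline onto arbitrary threads.
        return false;
    }

    // Called by the worker thread that holds the actor context.
    public void ExecutePending()
    {
        while (true)
        {
            Task next;
            lock (_queue)
            {
                if (_queue.Count == 0) return;
                next = _queue.First.Value;
                _queue.RemoveFirst();
            }
            TryExecuteTask(next);
        }
    }
}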

@JeffCyr (Contributor, Author) commented Dec 2, 2015

@Zetanova
This is an ActorTaskScheduler, not a general-purpose scheduler like TaskScheduler.Default.

ActorTaskScheduler.RunTask(Func<Task> asyncAction) can only be called within the context of an actor. Message processing for that actor is suspended until asyncAction completes, and meanwhile all tasks scheduled on that TaskScheduler are pushed onto the captured actor's mailbox and executed single-threaded.
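For illustration, a hedged sketch of such a call from inside a message handler, assuming the static RunTask overload described above lives in Akka.Dispatch; SomethingAsync is only a placeholder:

using System.Threading.Tasks;
using Akka.Actor;
using Akka.Dispatch;

public class MyUntypedActor : UntypedActor
{
    protected override void OnReceive(object message)
    {
        // Mailbox processing for this actor is suspended until the delegate completes.
        ActorTaskScheduler.RunTask(async () =>
        {
            await SomethingAsync(); // continuations run back in the actor's context
        });
    }

    private static Task SomethingAsync() => Task.Delay(100); // placeholder for real IO
}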

So yeah, you can deadlock if you block inside the TaskScheduler, but that's by design; the same goes for any single-threaded TaskScheduler, like the UI thread with TaskScheduler.FromCurrentSynchronizationContext().

I will create an AsyncActorSample to clarify the usage, but the basic scenario is to support async receives in an actor like this:

public class MyActor : ReceiveActor
{
    public MyActor()
    {
        Receive<SomeMessage>(async m =>
        {
            // Do something
            // ...

            // This releases the current thread, but message processing is still suspended
            await SomethingAsync();

            // Do something else (here we are back in the single-threaded context of the actor)
            // ...
        });
    }
}

The purpose of an async Receive is to perform IO operations without blocking a ThreadPool thread. If we performed blocking IO in a normal Receive, we could starve the ThreadPool and get bad performance.
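For contrast, a sketch of the blocking anti-pattern being warned about; BlockingActor and the HttpClient call are purely illustrative:

using System.Net.Http;
using Akka.Actor;

public class BlockingActor : ReceiveActor
{
    public BlockingActor()
    {
        Receive<string>(url =>
        {
            // Anti-pattern: .Result parks a ThreadPool thread for the whole IO call.
            // Enough actors doing this concurrently will starve the pool.
            var body = new HttpClient().GetStringAsync(url).Result;
            Sender.Tell(body.Length);
        });
    }
}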

It's true that some third-party library may use Task.Factory.StartNew or Task.ContinueWith without specifying TaskScheduler.Default, or await a task without ConfigureAwait(false), so it ends up executing arbitrary operations (possibly LongRunning) on TaskScheduler.Current, which is bad if that is an ActorTaskScheduler. This would be a bug in the third-party library and should be reported and fixed.

If you think that an async call might not follow good practice and could hijack the ActorTaskScheduler, you can always await Task.Run(SomethingAsync) and you will be safe.
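The defensive pattern mentioned here, sketched out; ThirdPartyCallAsync stands in for any library method you don't trust to use ConfigureAwait(false):

using System.Threading.Tasks;
using Akka.Actor;

public class SafeActor : ReceiveActor
{
    public SafeActor()
    {
        Receive<string>(async input =>
        {
            // Task.Run starts the untrusted call on the default ThreadPool scheduler,
            // so anything it schedules lands there instead of on the actor's scheduler.
            var result = await Task.Run(() => ThirdPartyCallAsync(input));

            // Back in the actor's single-threaded context here.
            Sender.Tell(result);
        });
    }

    // Stand-in for a library method that might not use ConfigureAwait(false).
    private static Task<int> ThirdPartyCallAsync(string s) => Task.FromResult(s.Length);
}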

@rogeralsing (Contributor)

This is perf tested and solid. I'm pulling this.

rogeralsing added a commit that referenced this pull request Dec 4, 2015
ActorTaskScheduler refactoring with benchmark
@rogeralsing merged commit e724d60 into akkadotnet:dev on Dec 4, 2015
@stefansedich (Contributor)

Awesome!

I will test this tomorrow to see if it fixes the mono issues I was having with Context and async!


@JeffCyr (Contributor, Author) commented Dec 4, 2015

Great!

@rogeralsing See my last comment in PR #1347: most usages of Task.Factory.StartNew and Task.ContinueWith in Akka should explicitly use TaskScheduler.Default; otherwise some continuations might get scheduled back in the calling actor's context. The same is true for async methods that await without ConfigureAwait(false).
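A hedged sketch of the two conventions being recommended, using only standard TPL APIs; the SchedulerHygiene class and its methods are illustrative names:

using System.Threading;
using System.Threading.Tasks;

public static class SchedulerHygiene
{
    public static Task<int> StartWork()
    {
        // Explicit TaskScheduler.Default: neither the work nor its continuation lands on
        // whatever TaskScheduler.Current happens to be (e.g. an ActorTaskScheduler).
        return Task.Factory.StartNew(() => 42,
                CancellationToken.None,
                TaskCreationOptions.None,
                TaskScheduler.Default)
            .ContinueWith(t => t.Result + 1,
                CancellationToken.None,
                TaskContinuationOptions.None,
                TaskScheduler.Default);
    }

    public static async Task<int> LibraryStyleAsync()
    {
        // ConfigureAwait(false): the continuation does not capture the caller's
        // context or scheduler, so framework-internal code stays off the actor's scheduler.
        var value = await StartWork().ConfigureAwait(false);
        return value;
    }
}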

Aaronontheweb added a commit to Aaronontheweb/akka.net that referenced this pull request Dec 5, 2015
…duler-refactoring"

This reverts commit e724d60, reversing
changes made to eb2067d.
@JeffCyr deleted the ActorTaskScheduler-refactoring branch on February 9, 2016.